Hmm…well, this thread has certainly taken off since the last time I read it. I’m not sure that I was thinking entirely clearly at that point (but I’m in the middle of exam week, so I’ve got an excuse). I need to read this, and I probably won’t get a chance to do that tonight. However, good points have been made, and I’m definitely interested in seeing where the discussion goes.
Boy, when they describe this as the place for long-running debates they weren’t kidding!
In a past thread, Science as Religion, we covered this. I’ll agree with Jerevan and stand by my definition of science in that thread:
As Attrayant points out, it aims to build a testable body of knowledge. It may not hit the mark yet still be science.
What is wrong with the OP? It emphasizes the technician as the scientist. Sorry, but the lab rat collecting data without critically analyzing it is no scientist. He is part of a team that is doing science, maybe. Einstein, on the other hand, doing a thought experiment and theorizing, collecting no data by instrumental measures, was doing science. Tris is mistaken, I think: science is the goal (formulating ever more accurate representations, ever more apt analogies and metaphors for reality), not the method.
Tris said quantifiable, and although he did not use instruments, Einstein relied heavily on the measurements and quantified characteristics of the behavior of light in his work. Scientists have goals. Science is the method they use to reach for those goals.
Tris
Let’s see if I can summarize the criticism thus far:[ul][li]My definition fails to include “repeatability” as a characteristic of the scientific method.[/li]
[li]I need to either 1) define what I mean by “instrument” further, or 2) drop it from the definition entirely, since it fails to characterize the fundamental feature of scientific research.[/li]
[li]My definition focuses too exclusively on scientific methods, and fails to account for theorizing, which is also a central element in the work of scientists.[/li]
[li]Popper’s “falsification criterion” represents a better definition of science than my own.[/li]
[li]My use of the word “instrument” can lead the reader to confuse my definition with “instrumentalism,” a specific philosophy of science that “denies any physical reality.” (Nice one, rsa.)[/li][/ul]Did I miss anything?
I’ll try to address these point one by one, although not necessarily in order. In particular, I’ll leave the discussion of falsification, which might get a bit hairy, until last. Also, I probably have to attack this over the course of several posts.
Repeatability:
There are two reasons behind my decision to exclude “repeatability” as an element of my definition of science. The first is that I’m not convinced that all scientific activity is really related to repeatability in any meaningful sense. Repeatability is really a normative characteristic of experimental laboratory work. A lot of scientific work, on the other hand, is observational and/or organizational in nature. Consider taxonomy, for example: is it based on the repeatability of experimental measurement?
Certainly, the observations upon which taxonomy is based could be considered repeatable, I suppose, but somehow I feel this fails to grasp the real essence of what’s going on. Or let’s take the argument over T-Rex: was he a predator, or a scavenger? Those who claim the latter do not in any sense base their arguments on “repeatability” (nor “falsifiability,” for that matter). Rather, their claims are based on inferences from several different points of data: what Whewell called “the consilience of inductions.” It will be noted, however, that regardless of where one stands in the debate over T-Rex (the dinosaur, mind you, not the rock group), one attempts to bolster one’s claims by considering the various morphological measurements of the creature in question.
My second reason for rejecting “repeatability” as a criterion is that it messes up the tight, well-balanced prose of my definition. I mean, think about it:
Science: the investigation of Nature by means of instrumental measurement, and, like, repeatable (?).
Ugh. No way, Dude.
Back with more, later.
Lib:
I meant no offense. My “shameless hijacker” accusation was meant in the joshing, elbow-in-the-ribs, good humor kinda way.
That said, the fact of the matter is that, as far as I know, I have in my OP simply offered up a definition of science for critical review. I have made absolutely no judgement whatsoever regarding its epistemological priority, or the epistemological priority of the natural sciences in general. Since I reject the falsification criterion, I reject as well the claim that it represents anything particularly foundational in science (or “scientific methodology,” if you prefer). If anything you’re criticizing Popper, not my definition.
The reason I reacted so quickly to your post is that I’ve run into this claim (i.e., the falsification criterion isn’t falsifiable itself, and therefore moot), on numerous occasions. I think it’s a fairly cheap misconstrual of Popper’s sometimes quite subtle arguments. Popper states, perhaps somewhat disingenuously, that he is not interested in the question of “meaning;” rather, he is interested in a method of differentiating scientific statements from all other kinds of statements. More precisely, he is interested in solving Hume’s dilemma regarding inductive reasoning. Thus, at least in the text of this particular argument, which can be found in his Conjectures and Refutations (pg. 32 and ff.), he specifically rejects the sort of comparison you seem to want to make between science and non-science.
I agree that there is a hubris underlying Popper’s text, and that his arguments have been exploited afterwards to justify the epistemological monopoly of the natural sciences as a means of knowledge production – it’s just that such observations don’t really have much to do with the OP.
I’m well familiar with Popper, and actually Wittgenstein’s “criterion of meaningfulness” is what bothered him most and motivated him to philosophize about science. As Popper said, musing on Wittgenstein’s work in the book you cited, “This [work] amounts to a crude verifiability criterion of demarcation”. Contrary to what you say, he sought to firm up the difference between science and pseudoscience.
After a rather famous correspondence with Adler, Popper decided that psychology could not be a science in the real sense because its theories were too interpretive. Thus, he wrote: “I could not think of any human behaviour that could not be interpreted in terms of either [Freud or Adler’s] theory. It was precisely this fact – that they always fitted, that they were always confirmed – which in the eyes of their admirers constituted the strongest argument in favour of these theories. It began to dawn on me that this apparent strength was in fact their weakness.”
He thought that scientific propositions ought to be “risky”, and that is what eventually led to his principle of falsifiability. He dispensed with Hume’s problem of induction fairly easily by declaring the practice of induction to be psychologically intrinsic. That led him to formulate a skeletal description of the scientific method by correlating induction with observation. He wrote, “This was a theory of trial and error – of conjectures and refutations. It made it possible to understand why our attempts to force interpretations upon the world were logically prior to the observation of similarities.”
Definitions are notoriously full of holes simply because of the tautological nature of language. Each word in your definition has a definition of its own, composed of other words. And each word in those has its own definition, and so on until a definition must, of necessity, eventually regress to become a restatement of itself. That’s why definitions are listed first in formal logic proofs, even before axioms, because they are identities.
It is a very tricky exercise to make a definition neither too broad nor too narrow. Yours is too narrow, for the reasons shown in this thread. And as Spiritus has pointed out, the ultimate instrument of any scientific inquiry is the human brain. Inasmuch as consciousness is a closed reference frame, you cannot — try as you might — divorce science from subjective interpretation or moral implication. Neither could Popper.
“Instruments”:
I insist upon the inclusion of “instrumental measurement” in my definition because I feel that the gradual introduction and development of measurement by instruments/apparatuses is really what characterizes the visible historical features of the scientific endeavor. Instrumentation is particularly relevant from an historical perspective; a great deal of the growth/progress of scientific knowledge has been predicated upon the development of more and more sophisticated methods for “measuring Nature” with instruments. This is particularly relevant with regard to measuring time, which was 1) absolutely essential to most advances in physics, and 2) an engineering nightmare of the worst kind.
To understand why I place such emphasis on this point, I suggest the following thought experiment: imagine how we would perceive the world, and how we would investigate Nature, without the aid of instrumental measurement. We would have no way of precisely measuring time, comparing weights, or calculating distance. Temperature, as well, would be something of a mystery to us. As Thomas Kuhn points out:
In other words, I argue that without the gradual incorporation of measuring instruments by science, our worldview would be radically different. The use of instrumental measurement has come to have a profound influence on the shape of our cognitive landscape. I want to say, perhaps incorrectly, that by placing instrumentation between ourselves and Nature, and relying upon that instrumentation to provide us with information, our “picture of the world” has gradually become “mechanized.” The last couple of sentences in the quote from Kuhn, above, really capture the kind of “gestalt switch” that I feel had to be made by researchers in this process. It seems second nature to us now, merely because we grow up in a world predicated upon this kind of knowledge.
Compare a 15th century bestiary with a modern textbook in zoology. The first contains pictures of fabulous, unheard of creatures, half real, half the product of some fevered imagination. The second contains an orderly progression of tables, measurements, graphs, statistical research results, and so forth. What is the difference? Perhaps it is an oversimplification, but I submit that the main difference is one of methods: in the 15th century we interrogated Nature with our equivocal subjects; today, we do so with measuring instruments.
I studied psychology for a couple of years in college, before moving on to greener pastures. My disenchantment with the field at the time was caused by the fact that, despite the application of what might be considered “scientific principles” to the study of human behavior, very little knowledge of value was generated. Granted, the department in which I studied was heavily behaviorist; but that only served to more clearly confirm the problem. I remember in particular one of my professors, who seemed to positively revel in the fact that his behavioral studies, and behaviorism in general, could at best produce only trivial knowledge about human subjects. Human behavior was too complicated, with too many uncontrollable factors, to be successfully tested in a laboratory setting, he claimed, and I’m inclined to agree. So what was the allure of behaviorism?
I submit that what drew researchers to this particular paradigm is that it held out the possibility of applying “scientific principles,” specifically, instrumental measurements, to the study of human behavior. That such an application was barren of theoretical/informational yield was, at least for a long time, irrelevant. I mean, after all, compared to, say, psychoanalysis, at least behaviorism was scientific. What I discovered in behaviorism was, in fact, the ritualistic application of science to a field of study that, in all likelihood, is not particularly amenable to scientific methods of study. That’s what got me to thinking about all of this stuff. In many fields of study, the “scientific method” is kind of a cultural ritual that the researcher performs, often fully aware that it will not produce particularly accurate, or interesting, results.
Spiritus:
I do not include human sensory apparatus as an element of this class. I’m not sure what you mean by the above, but am interested in any counter-examples of scientific research that rely exclusively on “subjective impressions,” rather than on instrumental measurement, as their primary methodology.
I’m still not quite sure, fully, how to define instruments. Measurements from instruments form the shared consensual background against which subjective impressions can be compared so as to create an agreed-upon picture of Nature. Thus, any man-made apparatus that is used, in any way, to measure a natural phenomenon should be included, of course. I also want to include survey forms, questionnaires, and such truck. Forms used in statistics to measure, for example, popular opinions, would fall under my definition of an instrument.
I’ll try to address your claim concerning the willingness of science to relinquish falsified theories in a later post.
Finally, to dispense as well with the last objection (posted by rsa): I want to make it clear that “instrumental measurement” refers to the actual use of physical instruments in the production of scientific knowledge, and not the “instrumental” (as opposed to, let us say, the “representational”) qualities of an accepted scientific theory. The idea behind the definition is to avoid passing judgement on the scientificity of theories, and to focus exclusively on methods.
Lib:
I don’t mean in any sense to imply anything about your familiarity with Popper, since I have no idea how much of his stuff you’ve read. I agree in general with your summary, above, except for one curious point:
Actually, as Popper himself writes in the paper referred to, he came to this conclusion after an encounter with Adler; specifically, Popper was working for Adler in a marital guidance clinic in a Viennese suburb, in 1919. He reported to Adler the case of a child who he (Popper) felt did not fit into Adler’s theories regarding “masculine protest,” but Adler quickly analyzed the case in terms of his theory anyway. When the somewhat amazed young Popper asked, “How do you know that your analysis is correct?” Adler responded haughtily, “Because of my thousand-fold experience.” Popper didn’t buy it.
I’m really curious as to where you found that second quote by Popper. Is this also in C and R? I’ve never read it before, but it makes him sound almost like Kuhn. Anyway, as you are no doubt aware, the falsification criterion, if accepted, neatly side-steps all possible inductive fallacies.
Yes, if we must be absolute about it, but your thesis would seem to make even simple communication totally impossible. I was hoping not to have to move so deeply into the field of epistemology, and just kinda stay on the surface and present a “working definition” without too much philosophical belly-button inspecting.
I think you’re being a bit premature here. May I at least be allowed a rebuttal before we decide if I’m right or wrong?
I would agree that the ultimate instrument of inquiry is the human brain, but I still feel at the intuitive level that one can nevertheless categorize this brain’s modes of inquiry into scientific and non-scientific. Since we regularly categorize knowledge products into one of the preceding two boxes, it is not unreasonable to discuss their meaning. I’m trying to argue that when the human brain investigates its surroundings by means of instrumental measurement, we can call that science. Other methods of investigation may or may not be equally valid, but fail to fall into the category of scientific research.
I wouldn’t dream of trying. Ultimately, all knowledge is mediated by the subject, usually (although not always) in the form of a narrative. But at the risk of sounding repetitive, the question remains: is that knowledge acquired from direct interaction with Nature, or through the filter of a measuring device? Again, I submit that the answer to this inquiry allows us to sort the knowledge in question into the class “scientific,” or into the class, “non-scientific.” I do not think it allows us, on the other hand, to assess its validity as knowledge: race biology was founded on very scientific principles, for example, and definitely relied on instrumental measurement as its primary means of investigation. It thus falls into my definition of “science” – but I doubt anyone on this board would seriously argue in favor of its validity. This interesting argument, should it be agreed upon, shows that science is not always a noble, or beneficial, pursuit.
P.S. Hiya, DSeid! Welcome to the debate!
Svin: “gestalt switch”… that wouldn’t have anything to do with paradigm shifts, would it?
I’ve been trying to figure out a way of fitting Kuhn into your definition and into this discussion, and it’s as good a way of doing it as any, I suppose.
The critiques that were made of your definition of “instrumental measurement” are, in my opinion, still valid. What defines an “instrument”? All the examples you’ve given so far are examples of tools used in quantitative research, but unless you’re willing to argue that all research can be and must be quantitative you’re going to have to deal with the problem of trying to fit numbers into qualitative research. I’ve done a small bit of study into one branch of qualitative research, ethnography, and one of the chief reasons that that particular type of research methodology exists is because numbers, frankly, do a poor job of explaining some forms of human activity. The kind of observation through language that is necessary for qualitative research, although containing problems of its own, can conceivably be more useful in certain situations and is intrinsic to ethnographic field research. This can lead to the kind of “researcher’s senses as instrument” situations that Spiritus was referring to earlier. Maybe we can’t trust them as much as numbers, but that’s what repeatability (and methodological scrutiny in terms of validity) is designed to address. By creating intersubjectivity, you grow increasingly closer to objectivity. You can never quite get there, but close enough so that it barely matters.
Then again, I’m not quite convinced you’ve incorporated qualitative study into your definition. Unless you believe that it simply “isn’t science”, in which case you’ve just done a finer job of attacking the social sciences than Popper ever did.
Kuhn, as well, might be useful for explaining a problem that people have been having, which is the distinction between non-science and bad science. Even if astrology is a science by Popper’s definition, that leaves it open to falsification, which has historically taken place. The enduring popularity of “bad sciences” is, in my opinion, due to their internal consistency as paradigms. Although they have anomalies that have caused them to fall out of favour as hegemonic or competing paradigms, those willing to overlook or explain away those anomalies cling to the paradigm for whatever reason, and therefore “bad science” sticks around. The relative ease with which one can “explain away” the anomalies helps determine how likely it is that paradigms stay around, which probably has a lot to do with the strong hegemonic paradigms of the “hard” sciences (whose anomalies stick out like a sore thumb) and the multitude of competing paradigms of the “soft” social sciences, where it’s fairly difficult to completely and utterly falsify a theory. Witness the enduring popularity of Marxism.
(I’ve gotta come down on the side of science being, however, about theories and not methods. The whole point is proving or disproving a theory; the method is merely the means by which one does that. Math isn’t necessarily a science, but it’s an extremely powerful method by which scientific investigation can take place.)
And Svin, philosophical belly-button inspecting is in the end the basis of all of this. All sciences, physical or social, are based on philosophy in one sense or another. Without the assumption of repeatability, science becomes utterly useless, but what is the assumption of repeatability but another form of the philosophical belief that, yes, the sun will come up tomorrow? One of the other revolutions that predicated the historical growth of the sciences was the growth of Deism in Europe; without the “clockmaker God”, how can one be sure that a personal Deity won’t just change the rules while you’re not looking?
As to your bestiary example… as far as I can remember, the first man to engage in an orderly classification of plants and animals was Aristotle. A fine chap and instrumental to any debate of science, philosophy, ethics, politics and everything else, but unfortunately our millennium can’t lay claim to him, and nor can the Enlightenment.
eris: I was thinking of the phlogiston theory of heat & combustion, not caloric. But some quick digging this morning indicates that, while the phlogiston theory had been called into serious question on its own merit by the mid-18th century, it was not finally put to rest until the work of Lavoisier, et al. developed a theory which brought oxygen into the picture of combustion in the late-18th, early-19th centuries. So I cannot offer that as a theory which was falsified without a better, more suitable theory taking its place.
On astrology as science (or not): The OP’s position here is at least consistent with his definition, which seeks to exclude fields of study whose primary work of interpretation is not derived from the instrumental investigation of natural phenomena – just as it excludes, say, psychology. This has been, actually, my primary “beef” with those who exclude astrology from the realm of Science, but include psychology and economics. Many objections about the first apply equally well to the second and third. So goes the joke: “Economists have predicted nine out of the last five recessions.”
More later, perhaps. There are some pretty heavy posts to wade through!
Jerevan: again, the problem is that “instrument” really means “quantitative”… there’s no instrument that can measure “red” or “happy” or “cold” or “honour”; at best they can find something that’s relatively close. Hence the implied division between the “real” sciences (which attempt to measure things like “red” or “cold” by imperfectly translating the concepts into variables for numerical instrumental observations) and the “fake” sciences (which attempt to study “happy” and “honour” which either cannot be quantified or can only be quantified with great difficulty).
Economics, of course, is the weird “little bit of column a, little bit of column b” example precisely because the particular subset of human behavior it addresses actually can be quantified, given the right assumptions. Since this allows for powerful modelling techniques and an aura of “scientific” respectability, this goes a long way towards explaining the economic determinism that dominates modern society.
The mention of Wittgenstein and instrumentalism has made some interesting thoughts pop into my head. For example, in LW’s On Certainty he investigates the phenomenon of asserting that one “knows” something. That in an investigation of knowledge one must have things for which “doubt” is meaningless. Not, for instance (to use his example), that I cannot doubt that I have a hand, but if I were to doubt I had a hand I could not make sure by looking (that certain types of doubt become, well, all-encompassing; and, with credit to Jerevan, asks “What stands fast? And who decides what stands fast?” [paraphrased]). This actually seems to tie in to the minor hijack in the “God/Gods” thread, in a way, in trying to separate a “falsifiable but ultimately unprovable theory” from “data we’ve acquired that fits the theory,” when the data is only interpreted from within the theory, so which is the data and which is the interpretation? As Lib notes, the scientific method strengthened by Popper is itself not a falsifiable theory. Does this mean science itself is not open to scientific investigation? It is, for instance, a theory about theories. But that doesn’t mean that it isn’t open to falsification, for if it accepts a theory that we know is wrong, then this methodology is wrong. But… here’s the sticker… where did we get the knowledge from that a theory/data set is right if not from the scientific method?
So the scientific method itself is not falsifiable in the sense of “falsifiable” meaning “making empirical predictions which can be isolated and observed.”
Oops, that can’t be right, because we’ve just let in some interpretation of instrumentation via “observation” that is itself not subject to doubt. Or is it? If it is, what can assert its truth?
No, it isn’t theoretically necessary to assert the truth of theories and data. But when we don’t, how are we not bogged down in instrumentalism anyway? If we accept that there is a physical reality that we can know, but the scientific method can’t let us know, what can the scientific method do, and what do we know about the external world that isn’t a result from our instrument (not yet rigorously defined)?
Well, would Popper say “nothing”? We cannot gain knowledge from incomplete induction in the empirical sense (AFAIK, and last time I noted this no one claimed the contrary, though please correct me if I am wrong). Now, I strongly disagree with this, not because I think he is analytically wrong, but because he forces a definition of scientific knowledge which really goes against the bulk of scientific work. If we never knew that our instruments work we would never get around to examining the data they release, because we compare the data to the theory, and the instrument to the data, and this makes each experiment its own little circle. When we say the data is bad it is because we assume the instrument is not, and I don’t mean “we have a high but not absolute level of confidence in the instrument.” I find that analysis of instrumental work to be absurd, quite frankly.
Can someone please explain why Popper’s definition avoids instrumentalism? It really isn’t intuitively clear to me. In fact, it seems he is forced there by the requirement of falsifiability.
http://www.drury.edu/ess/philsci/popper.html
(underlining in original removed)
I agree with this explanation (from what I know of Popper). By failing to assert the existence of discernable empirical truth we are reduced to listening to what our experiments—and their instruments—tell us, with the Theory as the dictionary in which the data has meaning. The data is meaningless outside the theory—not something I would quibble with—and because we both require falsifiability to be a factor of science and we cannot prove a negative we must accept that we can never tell whether science tells us anything about the world. The existence of physical reality is a moot point.
Jerevan, I am impressed with your searching, but I think we’ll have a hard time finding much because of the stubborn creature that man is.
Mr. Svinlesha, have we honed down what we mean by “instrument” yet? Did I miss it?
eris: (laughing) Don’t be overly impressed. I did a quick flip through a book on the history of chemistry which sits on my office bookshelf! You were probably right when you said (earlier) that most theories aren’t considered “falsified” until something better is proposed instead. I would point out, though, that while this might be the way things often happen, the scientific method doesn’t require it. I think this aspect of scientific endeavor is a human behavior, as you seem to suggest; a reaction to having a near-and-dear theory challenged or demolished. “Oh yeah? So how do you explain it, smarty-pants?” And your extrapolations from my “what is science, and who gets to decide?” questions are spot on – in that, if we aren’t very careful, we end up making circular arguments.
Would you mind expanding on this a bit? I don’t quite grasp your point – especially the last sentence. Could you provide me with a definition of instrumentalism, in the way you are using the term?
Granted… but note my early objections in this thread to the idea that science must be quantitative. Often it is, but I am not convinced that this is always necessary. But also granted, that I have yet to support this assertion adequately and I’m still sorting through my thoughts in an effort to do so – or abandon it, as any good scientist ought to.
OK, I understand the division which you outline. Does this mean that you don’t consider psychology to be a science? Or if you do, what does psychology quantify about the individual? I ask because (a) I do not honestly know whether psychology involves any kind of quantification of personality or individual behavior; and (b) psychology is usually the “real science” by which the allegedly “fake science” of astrology is measured and judged. If psychology does not quantify anything and so cannot be considered a “real science”, then it’s in the same boat with astrology, to my way of thinking.
Do forgive this minor hijack; it isn’t my intention to drag this thread into a discussion specifically about astrology’s scientific merits (or lack thereof). Been there, done that, liked it but had enough, in another GD thread a while back. But I am curious to know your thoughts.
Instrumentalism:[ul]A philosophy of science which judges the worth of a theory by its fit with empirical evidence but requires no understanding of causal correlation.[/ul]
Also: [ul]The doctrine that scientific theories are not true descriptions of an unobservable reality, but merely useful instruments which enable us to order and anticipate the observable world. Traditional versions of instrumentalism were influenced by verificationist theories of meaning, and held that theoretical claims about unobservables cannot be regarded as literally meaningful. More recent versions of instrumentalism are motivated by sceptical rather than semantic arguments: they allow that scientists can make meaningful claims about an unobservable world, but deny that we should believe those claims. One motivation for this kind of sceptical instrumentalism is the ‘under-determination of theory by evidence’. However, realist opponents of instrumentalism can respond that the compatibility of different theories with the observational evidence does not mean those theories are all equally well supported by that evidence. A better argument for sceptical instrumentalism is probably the ‘pessimistic meta-induction’, which argues that, since past scientific theories have all proved false, we can expect present and future theories to prove false too.[/ul]
And finally: [ul]Philosophical theory, developed in the late 19th century by J. Dewey, which holds that beliefs, hypotheses, etc. are instruments with which we engage with the world in which we live, and are therefore justified simply to the extent that they are successful and fruitful.[/ul]
That’s a mouthful. Notice they sort of depend on the context in which they are used.
Thank you. I need to re-read these several times to digest the meanings and contexts, but what strikes me right away:
Very much like the undergraduate colleague I mentioned in That Other Thread, who said that it didn’t matter whether scientific theory represented the underlying reality so long as the observable world behaved as if this “scientific” representation were real. An attractive notion… but what do you do with an observation which science can’t adequately model? Disbelieve the observation? We have scientists here who do just that! :rolleyes:
Now I will have to go back and re-read earlier posts where instrumentalism was invoked.
Popper in fact responded, “And with this new case, I suppose, your experience has become thousand-and-one-fold.”
It is from his 1962 paper, Science, Pseudo-Science, and Falsifiability. It begins, “The problem which troubled me at the time was neither, ‘When is a theory true?’ nor, ‘When is a theory acceptable?’ My problem was different. I wished to distinguish between science and pseudo-science; knowing very well that science often errs, and that pseudo-science may happen to stumble on the truth.”
I’m sure you can find a copy online somewhere.
With respect to inductive fallacies, once again I feel it necessary to point out that falsifiability is itself an induction. That is philosophically problematic.
Communication is not impossible. Exact communication, however, is. That’s the whole reason for my very first post, the one you ignored.
Then am I to understand that you consider the entire field of Quantum Mechanics to be something other than scientific research?
Well, I agree that you are not sure what I mean. Rest assured that I did not mean, “subjective impressions are sufficient for scientific research.” Allow me to restate. Human sensory impressions have been historically, and remain, a valid element of scientific inquiry. If you exclude them from your class “Instruments”, then your definition is both historically and functionally flawed.
Since you like taxonomy as an example, I wonder how many words indicating direct human sensory impressions can be found on a random page of any decent field guide to birds. For the chemists among us, consider the litmus test and how it is evaluated. Can we agree that reflecting telescopes are still useful instruments of scientific inquiry (perhaps even for recording the magnitude of stars)? Is chaos theory a scientific endeavor?
Substitute “sensory impressions” for “measurement from instruments” and your sentence remains true (though strangely phrased). Perhaps more tellingly, one might counter that sensory impressions form the shared consensual background against which instruments can be calibrated so as to create an agreed upon picture of nature. To use someone’s earlier example of the temperature scale–how exactly did Kelvin determine when water was undergoing a state change in order to set the “anchors” of his scale?
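(A rough sketch of what such an “anchor” amounts to – and I mean this only as an illustration of the old two-fixed-point centigrade convention, not as a claim about Kelvin’s actual laboratory procedure: one reads the instrument at two sensed events and interpolates between them,

T = 0° + 100° · (X − X_ice) / (X_steam − X_ice),

where X is the raw reading – the length of a mercury column, say – and X_ice and X_steam are the readings taken at the moments the observer judges, by eye, that the ice is melting and the water is boiling. Every number in between is defined relative to those two sensory judgments.)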
I think the “quantitative vs qualitative” dichotomy that some are pointing to is at best only tangential to this point. Chemistry, astronomy, physics, all of these “quantitative” sciences relied heavily upon human sensory impressions for their development and continue to make use of human senses today.
I would appreciate that. I think it is the more significant of the points I raised, actually.
Lib
It’s a minor point, since I agree with your position (though I also agree it is a bit of a hijack to the OP), but I don’t see how falsifiability is an induction. Can you sketch out the inductive process which leads to the conclusion “falsifiable”?
erl
I’ve never considered Popper as Instrumentalist before. Of the 3 definitions you cite I would say:
[ol][li]Very close to Popper, though I would quibble with the assignment of “worth” to the simple correspondence with empirical evidence. Popper, to the best of my knowledge, never framed his arguments in terms of worth and I doubt that he would have agreed with this valuation.[/li][li]Not very close. Popper’s criterion of falsifiability says nothing about the ultimate truth value of any scientific theory. A theory which has not yet been falsified may or may not be true. It is not accurate to paint Popper as arguing that no scientific theory could possibly be a true description of reality. I also do not recall reading that Popper denied that statements which were not subject to falsification could be meaningful, or even whether people “should” believe such statements. As the phrase quoted by Lib above demonstrates, he was well aware that non-scientific statements could be true and scientific statements could be false.[/li][li]Way off. Unless one defines “successful and fruitful in the world in which we live” as “not yet falsified”. Newtonian mechanics is a very successful and fruitful theory in my phenomenological world. More successful, for my needs, than relativity. (Ease of computation and modeling does make a difference.) I don’t think Popper would agree that I would be justified in believing Newton over Einstein.[/li][/ol]Admittedly, it has been years since I read anything by Popper and I have hardly kept up with the nuances of philosophical definitions of science (Kuhn bored me to tears, I’m afraid). Still, you asked, and I felt compelled to type. Basically, I would say that the distinction between Popper and the instrumentalists is one of focus: epistemology versus metaphysics.
Before continuing to argue my point, I want to reply to some of the criticisms others have posted in this thread so far.
Poccacho:
Einstein may or may not have been doing good science, but if you accept Popper’s criterion, then you certainly aren’t. According to Popper, there is no such thing as experimental confirmation. So, to be technically correct, Einstein’s theories have never been confirmed – they’ve merely survived some falsification tests.
Or have they, really? In order to derive such tests, one must infer downward from Einstein’s relativity theory to an observation that it predicts. Unfortunately, such observations can usually be explained in terms of two or more competing, higher-level theories. I know, at the very least, that the glitch in Mercury’s orbit can be explained in reference to other high-level theories in physics, even if it is also predicted by Einstein’s General Relativity. Some philosophers of science insist that any observation can always be interpreted in the light of several competing theoretical models, and thus that scientists must choose the one model they prefer on the basis of qualities other than mere “falsifiability.” Sociologists argue that these choices are more often than not predicated on “social negotiations” among scientists – on the intuitive sense among a majority of scientists that one particular explanatory model is somehow “more correct” than another, a sense that is only tenuously connected to the empirical evidence at hand. That these scientists afterwards argue their choices are evidentially based is another kettle of fish altogether, but their insistence creates the illusion that science is a much more objective and value-neutral pursuit than it actually is.
Though you and Popper may argue Einstein was doing “good science” by Popper’s standards, I seriously doubt that claim. Einstein developed a theory in physics, and published it, long before anyone could figure out what it implied observationally. The observational tests came afterwards, almost as an afterthought, and one wonders what might have happened to the theory if scientists had not been able to derive a few relatively straight-forward observable predictions from it. In other words, Einstein‘s theory was not developed primarily with an eye towards its usefulness in predicting observation statements – it just luckily happened to predict some.
It took a few years to figure out what those observational predictions might look like, and even as late as 1920, only two falsifiable observations had been derived from Relativity – it explained the troublesome glitch in Mercury’s orbit, and it predicted that the bending of light caused by the sun’s gravitation would cause stars near the sun’s periphery to appear “displaced” during a solar eclipse. Despite this dearth of evidential support, Einstein’s theory was already accepted by the vast majority of the scientific community.
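(For concreteness – and these are just the standard textbook figures, not anything I’ve taken from Popper’s account – those two “falsifiable observations” reduce to a pair of numbers. The predicted extra advance of Mercury’s perihelion is

Δφ = 6πGM_sun / [c²a(1 − e²)] per orbit, where a and e are the orbit’s semi-major axis and eccentricity – roughly 43 arcseconds per century for Mercury,

and the predicted deflection of starlight grazing the sun is

δ = 4GM_sun / (c²R_sun) ≈ 1.75 arcseconds at the solar limb.

Two small numbers, in other words, and an entire theoretical edifice already accepted largely on their strength.)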
There are many theories today from which we cannot yet derive falsifiable observation statements, and that therefore cannot be considered “scientific” by Popper’s criterion. However, if we were to develop the technology necessary to falsify them tomorrow, they would suddenly become “scientific theories.” Thus, one problem (among the many) with Popper’s criterion is that it seems to relate “scientificity” to the level of technological development of society, rather than to mere “falsifiability.”
I wouldn’t dream of jumping to such a conclusion, and such was not the intention behind my remark. In fact, I would argue in favor of “historicism,” myself, because I don’t agree with Popper’s analysis.
My point was merely that Popper tended to dismiss fields such as sociology, ethnology, history, and so forth as “piffle” because they do not produce falsifiable hypotheses. On the other hand, your wife’s research sounds very interesting and I suspect that Popper might even consider it to be scientific. Since it apparently involves measuring various properties with precision instruments, it would also fall under the category of science in my definition of the word.
Jerevan (in a discussion with erl):
If you feel the need to reinvent the wheel, Jerevan, then please be my guest. But otherwise, perhaps I can save you some time and trouble by pointing out that it is pretty much taken for granted now by historians of science that a theory is never abandoned until a better (or at least, alternative) explanation exists to take its place. Usually, there are several competing bodies of theory in any given field – some that are more accepted than others.
I’m curious about that last paragraph in the passage quoted above. Since all scientific research is carried out by human beings, how can you separate the “humanness” of the endeavor from its methods? Isn’t that something of an artificial division?
DSeid:
Hmmm…this may be more of a semantic quibble, but anyway…well, again, the reason behind my insistence on “the instrumental measurement of Nature” in my definition resides in the fact that I consider scientific research to be primarily empirical in nature. My imaginary scientist, such as would fall under my definition, would probably respond: “Einstein’s theories and thought experiments in all honor, but if you can’t test ‘em in the real world, then they ain’t worth diddly. After all, what separates the knowledge I produce from mere philosophy is the fact that mine is empirically tested against Nature.”
Considering this, Einstein does not fall into my definition as a scientist. Perhaps a great mathematician, or theoretician – but the scientist is that fellow out there in the white lab coat, doing all the grunt-work. This may or may not be a weakness in my definition.
Demos:
Yup.
Nope.
Yup.
Yup.
(Side note: While aware of this, I was hoping to “head off at the pass,” as it were, the sort of Wittgensteinian debates that often develop between erl, Lib, and Spiritus. Such epistemological deconstructions make baby Svinlesha’s brain melt. I would rather discuss my definition than discuss the question, “to what extent is it possible to define a thing?”)
Oh, I see. We’re going to be like that, are we? I hate it when someone confronts me with the historical record.
Tendencies towards a “mechanization of the world picture” can in fact be traced even back to before Aristotle; different explanations of the world ran parallel for a long time, until we began to measure it with instruments – which in turn created an agreed upon standard that every subject could relate to. The process can also be conceptualized as the “externalization” of measurement, if you like.
More later.
Oh, one last thing – erl, I think the observation that Popper’s criterion is a kind of default instrumentalism is absolutely smashing.
Spiritus:
I have nothing in particular against what I perceive to be your strictly phenomenological approach to this question, nor would I deny that all knowledge is mediated, ultimately, by the subject. But judging from your response, I suspect that we are nevertheless talking at cross-purposes, somewhat…
Well, I don’t, actually, since I feel that it comes dangerously close to toppling my definition. I keep wishing some professional taxonomist would walk into this thread and say something like, “Man, I do nothing all day but measure…”
Yup. Could be a problem. I would like to counter by claiming that when such field guides were originally compiled, they were constructed on the basis of in-depth morphological studies that, in their turn, used various sorts of measurements (weight, wingspread, and so forth) as their fundamental method of sorting different birds into their appropriate species category. Unfortunately, I don’t know enough about the historical development of field guides to make such a claim. But the kicker here is color and pattern, anyway: birds are most often identified on the basis of their plumage, and that, in its turn, is direct Nature-to-senses research. Which is a problem, maybe.
But then again, maybe not. As I understand it, the entire taxonomic chart is undergoing a profound re-write even as we speak, on the basis of DNA studies. It has been discovered that some creatures, even though morphologically similar, are genetically quite dissimilar; and others that are morphologically distinct are nevertheless genetically similar. On the basis of these studies, it is becoming apparent to evolutionary biologists that the morphological similarities that present themselves directly to our senses are a poor guide in determining the way in which various species and subspecies are related to each other, and the chart is being reorganized so as to reflect more accurately the genetic relationship among them. (This last may be poorly worded, but I hope you get my general idea.) Since the DNA structure of a creature is not something that presents itself directly to our senses, but is rather a characteristic that can only be determined by means of a specific measurement process in a laboratory, this would again place taxonomy under my definition of science. With some luck, it might also relegate bird-watching to a “craft,” rather than a science.
The reason I’m wasting so much bandwidth on this particular issue is that I think it also reflects back upon what I’m arguing is the basic gist, if you will, of the Scientific Project. If my assertions above are factually correct, then what we see is, basically, “science in action.” Equivocal subjective impressions, even if they are cross-referenced by means of intersubjectivity and systematically categorized, are nevertheless gradually being replaced by “more accurate” standardized, and precise, instrumental measurements. This is exactly the historical progress of science that I believe constitutes its essential spirit, and its visible contours: a process whereby, step by step, more and more fields of inquiry are “conquered” by establishing quantified standards of measurement within them. Central to this process is the “scientific instrument,” in all its myriad forms, which allows very different subjects (i.e., researchers) to agree upon a single reference point, use it in their work, and create a shared vocabulary around it.
The litmus strip, in this instance, is the “instrument of measurement.” It would be much more difficult for you chemists to discuss the pH value of a solution if you measured it by simply sticking your hand into the vat, wouldn’t it?
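(To spell out why the strip hands us something shareable rather than a private sensation – this is just the standard definition, nothing exotic:

pH = −log10(a_H+) ≈ −log10[H+],

so when two chemists in different labs both report “pH 4.7,” they are pointing at the same quantity – which is precisely what two hands in the vat could never guarantee.)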
To reiterate: ultimately, all knowledge is mediated by the subject. Someone is looking at all those spots on the graph. What differentiates scientific inquiry from other forms of inquiry, I claim, is that while the latter derives info by other means, science derives it by subjectively inspecting the results presented by a measuring instrument.
Absolutely. How do you measure the magnitude of a star, exactly?
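(Rhetorical question, but the textbook answer happens to make my point: since Pogson, magnitude has been defined in terms of an instrumentally measured flux ratio,

m_1 − m_2 = −2.5 · log10(F_1 / F_2),

so that a difference of 5 magnitudes corresponds, by definition, to a factor of exactly 100 in measured brightness. The old eyeball estimates survive only insofar as they have been re-anchored to the photometer.)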
You probably know more about this than I do, but anyway, I happen to have Prigogine’s book, Order Out of Chaos, right here in front of me. It’s chock full of graphs, tables, measurements, and so forth.
Actually, if we were to simply substitute “sensory impressions” for “measurement,” we might be on the same note: “Sensory impressions from instruments form the shared consensual background against which subjective impressions can be compared so as to create an agreed-upon picture of Nature.”
Nah. I think this is backwards, see? The problem with sensory impressions alone is that they fail to create a “shared consensual background.” That’s why, IMO, scientists attempt to eliminate them. The calibrated instrument, on the other hand, can use almost any random event (like, for example, the boiling point of water) as a standard of measurement. Once the standard is agreed upon (and the instrument is “black-boxed”), it falls into the background of the scientific discourse and remains, almost inevitably, part of the agreed-upon framework that allows science to progress to more interesting experiments/observations. It also allows for “repeatability,” and enables a researcher in Tokyo to share his experimental results with a colleague in Chicago.
I hope to eventually get to your second point, but please be patient – it may take some time!
I didn’t mean to hijack. The Opening Poster went out of his way to make the point at the end of his post that “any attempt to define ‘science,’ or demarcate it from other forms of knowledge production, must concentrate on methods…”. That led to a discussion of the scientific method, which then led to a discussion of Popper and falsifiability.
I think the best sketch of how induction leads to falsification is Popper’s own observations on Marxism, Freud and Adler, and Einstein.
For example, he noticed that whenever Marxism failed to make correct economic predictions, its adherents would append ad hoc hypotheses in order to make it compatible with the facts. At about the time that he was studying the theories of Freud and Adler, he attended a lecture in Vienna that Einstein gave on Relativity. What struck him was that, while the psychoanalytic theories were amenable solely to confirmation, Einstein’s theory was undergoing spirited criticism. From these particular experiences, along with subsequent observations, he eventually concluded that a theory that had testable implications could be falsified. And from this, he derived his principles of demarcation that he believed separated science from pseudoscience.
He drew certain quid facti particulars into a generality, which is the classical definition of induction.
Arrrrgh!
So many intelligent points to respond to, and so little time!!!
First off, can we please establish which meaning of “instruments” Mr. S. has in mind? erl’s definitions open up lots of possibilities which I doubt were in the intention of the op.
I suspect that he means exclusively science’s use of technology, and certainly science tends to use tools to accomplish its goals. But such tools, while increasingly commonly associated with “doing science”, are only extensions of our perceptions and are not integral to “science”. Instruments help extend the range of what we can measure (and thus model) beyond the usual range of our perceptual apparatus; they help reduce inter-observer and event-event variability, which allows for more informed and predictive models to be created; they allow for the use of a more precise vocabulary – so that each discussant means the same thing when they use a word, rather than being left to individual meanings for words such as “hot”. But the act of instrumental measurement is not science.
The point of science is the development of the best possible models, or metaphors, about the world. The better a model fits past observations and predicts future ones, and the greater the variety of data it explains, the more likely it is to be accepted and to continue as the reigning hypothesis. Measurement by some means is required to have the data, and the data is what the metaphor must fit and predict. But the instrument is not the science. The Hubble telescope, collecting data, is not a scientist, despite the fact that it is “interrogating nature by means of instrumental measurements.” The science is the creation of the explanation for the data and the willingness to discard (or at least modify) the explanation if it no longer predicts new data and/or if a different metaphor comes along that is a better fit.