"Hard and fast" ... in science, you lecher you!

DSeid: This strays from the OP, but it is a subject in which I am really interested (and I am interested in devoting much of my life to working on it).

I think that we are not really far apart in the long run. IMHO, all of this shifting and redefining is the hard sciences percolating up into the clinics and into morphologic taxonomy. As this happens, old mistakes get corrected.

In clinics, I will agree that phenotype (and therefore disease) is a many-layered thing. It is a polyvariable situation; any number of factors can exacerbate or mitigate a pathology, and thereby push the balance from health to disease. You focus on the phenotype; I focus on the pathology. I think that the pathology is easy to define, that it is easy to put a hard and fast box around it, and that this will, in the future, be enormously relevant to treating the disease.

The future: let’s say that a 20-year-old patient comes into your office with a sore wrist, which has been persistently sore for a month, and a little bit of fatigue. With a red-capped tube of her blood, you send for a panel of DNA testing, serum electrophoresis, and PCR and blood cultures for infectious agents. It could come back in any number of interesting ways. Maybe she has an infection, and you would give an antibiotic specific for the causative agent. Maybe what you are seeing is the first symptom of SLE with anti-dsDNA. You treat that with appropriate immunosuppressives, or a specific drug which lowers the anti-dsDNA. Or rheumatoid arthritis. Or maybe she has an allele which predisposes her to early-onset osteoarthritis, or osteomas, or osteosarcomas. Treat as such. Or perhaps she just forgot to tell you that she fell on it and sprained it: the prostaglandins in her blood tell the story just as well. Back to the standby NSAIDs and ice.

The DNA tests, the serum proteins, and the infectious-agent tests all define relatively simple pathologies with complicated phenotypes. Yeah, I know it sounds less fun than what we do right now, but in the end you have cured a disease rather than just treated the symptoms.

What about multifactorial disease? Alzheimer disease, breast cancer, Parkinson disease? We should still be able to make lists of alleles conferring disease susceptibility. We should still be able to define the diseases not only genetically, but with precise biochemical tests (e.g., PD type 10B = general faults in the ubiquitination pathway in the dopaminergic neurons of the substantia nigra), and treat as such. Cancer works the same way, but since genetic changes take place in the cancerous tissue, the same analyses will be done on the cancerous and surrounding tissues as well as the normal ones.

The science behind diagnosis is focused on putting disorders into neater boxes based on their pathology. The question of how the pathology causes the disorders is still mostly unanswered. But the treatments of the future will focus on the pathology, unlike the treatments of today, which usually (apart from antibiotics and a few others) treat the symptoms. The treatment will be determined by the neat box of pathology, no matter how complicated the phenotype turns out to be. Or at least in the edwino wonder-world of the future. The ethics are still a matter of contention.

I maintain no illusions that this will be possible with every disorder, or that everything will be “hard and fast.” People will still get sick in new ways, new disorders will crop up, and “wastebasket” diagnoses, in which we can find no underlying pathology, will still be around. Other diseases, primarily of aging, will still occur in “normal” people, due primarily to a lifetime of environmental exposure (I put it in quotes because as our resolution goes up, chances are no one will be completely normal). In these cases, we try to define the disorder by the symptoms and treat them, just as we do today. Hopefully, these will be the exceptions rather than the rule.

You don’t have to dig that far down the taxonomy to find shifting. Look at the jump from two to five kingdoms, and then the seven-kingdom model. (I’m from Illinois originally; U of I came up with it, IIRC, so it was mentioned in high school.)

So crabs infected with Sacculina carcini are no longer crabs, but Sacculina carcini? I always imagine them as zombie crabs that live only to serve their barnacle masters.

For those of you unfamiliar with Sacculina carcini (I’ll wager everyone, as you all probably have lives), S. carcini is a parasitic barnacle. The female injects a few of its cells into the crab, and those cells promptly take over the crab’s body, making it sterile and, if the crab is male, turning it female. (The rest of the barnacle’s husk dies.) S. carcini then invites a male to join in (two, actually), they mate, and their offspring are carried in the egg-sac area that the crab would have used for its own offspring. The crab takes care of the barnacle eggs like its own, and even helps the young disperse when it is time for hatching.

I am using fertility as a gross measure of genomic integrity. If an individual can reproduce, or at one point could have, it is still the same species. I am talking about the genome, not immature individuals, or individuals who have acquired disease, had surgical intervention, or undergone menopause. Fertility is just a by-product of genomic integrity for my purposes.

So let me restate: if an individual is not genetically infertile, then that individual is of the same species. Genetic infertility here means the inability to produce normal gametes due to large reorganizations in the genetic material. I will also count large changes in genital shape (another proposed mechanism of speciation) as genetic infertility.

Thanks for the clarification. But how would you rate animals that have variable genital shapes, such that certain members of the species can only reproduce with certain others, but their offspring will have differently shaped genitals from their parents and the same shape as others in their species? I don’t know if this happens; I seem to recall a bunch of weird bugs from evolution class, but I didn’t have time to take the invert class. But in human terms, say someone had a very large member and could only mate with women who could accommodate it, but the offspring had smaller members and could potentially mate with any normal human. How would you classify this?

So my patient with a deletion of the sex related homeobox (I think it was on a tip of chromosome 9?) who is genetically infertile is not human? My patient with Turner’s syndrome? Not by the definition offered up so far. You get my point.

I’d like to continue with your clinical digression though. Your edwino wonder-world of a neat box around every pathology and a massive panel screening for every box. Let us leave alone our disagreement about the likelihood of these neat boxes (vs phenotypes as stable network positions from a variety of paths). Let’s presume your neat boxes. Let us focus on the consequences of lab screening.

Presume that your tests are very reliable… Let us also presume that each neat box has a small prevalence, say one out of 10,000. (Probably a lot smaller, given how small each neat box will have to be.) We’ll give the test 99% sensitivity (the odds of testing positive, given being positive) and 99% specificity (the odds of testing negative, given being negative). Pretty damn good test, huh? How often will your test tell us that a patient has a condition when they really do not?

About 99 times out of a hundred, a positive test will be in error. Do the math and remember Bayes’ theorem. Screening for hundreds of conditions at a time will result in almost every patient having a positive result, which will be false 99% of the time.
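The arithmetic behind that claim can be sketched in a few lines of Python (a minimal illustration, not from the original posts; it just plugs the stated prevalence, sensitivity, and specificity into Bayes' theorem):

```python
# Positive predictive value via Bayes' theorem, using the numbers
# from the post: prevalence 1/10,000, sensitivity 99%, specificity 99%.

def positive_predictive_value(prevalence, sensitivity, specificity):
    """P(disease | positive test)."""
    true_positives = sensitivity * prevalence           # sick and caught
    false_positives = (1 - specificity) * (1 - prevalence)  # healthy but flagged
    return true_positives / (true_positives + false_positives)

ppv = positive_predictive_value(1 / 10_000, 0.99, 0.99)
print(f"P(disease | positive) = {ppv:.4f}")  # ~0.0098
```

So even with a 99%-accurate test, a positive result is correct only about 1% of the time at that prevalence, which is exactly the point about screening panels.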

Beware of an overreliance on testing. Tests are best used to confirm what you are already very suspicious of based on your history and PE, not for fishing expeditions.

(emphasis mine)
I want to come back to this, since it addresses the OP. Sounds fuzzy to me. He is human or he ain’t; gimme the bright line. You don’t really mean that “human” means the ability to “reproduce with a human and produce viable, fertile offspring,” but that such is a stand-in for “large reorganizations in the genetic material”? How large is large enough? How about huge translocations and deletions that nevertheless spare reproductive ability, versus small point deletions of sex-differentiation loci that cause infertility? Is either human? Both? I want my hard and fast line, or else “human” is not a worthwhile or valid scientific concept and therefore should be discarded.

I’m familiar with them - They’re my all-time favorite parasite :p. Ah, brings back fond memories of taking Marine Invert. Zoo., lo those many years ago.

That’s all, I have nothing relevant to add, though I’m finding the discussion interesting :).

  • Tamerlane

DSeid: Both your points, on the clinical aside and on infertility/the definition of human, are very, very good and cut right to the heart of the matter. Large-scale genomic reorganization is (at least right now) one of the major driving events (if not the major driving event) in speciation. So genetic infertility is right at the edge.

I must be honest: I need some time to think of good answers, and of whether my definitions hold up. Let me give some thought to your questions while poring over restriction maps, radioactive filters, and DNA sequence this week. I have an extraordinarily busy week, so even if I were to post, I think most of my stuff would be less than insightful (like this post…).

I will say that defining a class doesn’t mean you can’t have subclasses. Defining a class doesn’t mean that everything has to fit into one class or another. We can say that something is human or not human, and toss everything that’s an ambiguous case. Perhaps the goal of scientific categorization is not to be inclusive so much as to be exclusive. I have used fertility as a gross measure of genomic stability to set the definition as human. This excludes a lot of people, but I don’t see anything wrong with that. The original Linnaeus taxon holds, even if none of his others really have (besides being roughly correlated with evolutionary clades). And we can draw a neat box around it, even if it does include things like large transpositions (but are the offspring fertile and viable?) and excludes point mutations in the SRY gene or whatever. At least it is a definition, and we can put every person into the class or not into the class.

I will think about the other cases this week. I apologize for neglecting the debate, but science and more pressingly my boss calls.

The very basis and motivation of Karl Popper’s work was an attempt to discern and define the difference between science and pseudoscience. Popper was the philosopher who conceived the idea of “falsification”, which led to the development of the scientific method. Prior to him, science was believed to generate truth from empirical evidence, and then proceed gradually to improve theories with new data.

Unfortunately, that sort of approach led to theories by the likes of Sigmund Freud, Alfred Adler, and Karl Marx, theories that didn’t quite sit well with Popper because he noticed that practically anything under the sun could be cited as evidence in support of them. Proponents of two completely opposing theories could lift a story from a newspaper and go, “Aha! You see? Our theory is corroborated by this!” Bankruptcy of a business enterprise, for example, could be cited by Marxists as a symptom of capitalism’s failure, while capitalists could call it an example of how the market merely seeks a relentless equilibrium.

In Conjectures and Refutations, Popper described in fascinating detail how he came upon the idea of falsification as the proper criterion for separating the hard and fast from the soft and flimsy. Attending a lecture by Einstein on his new Theory of Relativity, Popper was surprised when there began a free-for-all exchange of criticisms and answers. That’s when it dawned on him. What made Einstein’s theory scientific was that it made predictions that you could falsify.

From Popper:

And then:

After enumerating seven considerations, he stated his conclusion this way:

Poor Newton and Galileo. I guess they weren’t the fathers of the scientific method after all.:rolleyes:

edwino, I look forward to your replies when time allows. Meanwhile some additional comments for you to chew upon.

Knowledge develops in two main ways: deduction - starting with a set of given axioms and following them out logically; and induction - believing something because of past experiences.
A purely deductive system begins with sharp definitions. Few pure deductive systems exist, because most are based on axioms that owe their acceptance to induction.
Induction forms definitions out of experiences. We always reserve some doubt about inductions (however small), because inductions are never proven correct; they just haven’t been proven wrong. Deductions may then follow.

Lib brings up falsifiability, which causes me to recollect a past thread on a definition of science, http://boards.straightdope.com/sdmb/showthread.php?threadid=113755&highlight=falsifiability+science, in which Popper and other subjects were discussed.

There I offered (not for the first time) this definition of science

As to definitions in science, well, it depends on what data you are trying to explain and on what the end is (as apos has implied). If we are interested in evolution and speciation, then a definition that focuses on reproductive capacities is required. If we are interested in investigating behavior or other aspects of physiologic function, then such a definition is not always useful. Both are valid scientific inquiries, both can lay claim to use of “the short hand”, and neither will come up with a term that is entirely sharp-edged. And for each, the process of attempting to create a sharper definition will help further understanding of what is important to study.

Libertarian hits the nail!

The sort of discussion that has been going on in this thread, about how classification can be used to draw real lines in the sensory chaos that surrounds us (a.k.a. reality), is an old horse in philosophy. The question begins with the classic, seemingly pathetic question “what is a chair?” Despite edwino’s conviction to the contrary, the challenge of answering this taxonomically is dumbfounding. Yet, intuitively, we all know what a chair is! And whether you define a category extensionally (by who is included) or intensionally (by who should be included), you won’t get rid of the quandary. As Libertarian so eloquently points out, more important than the precision of our classification is whether the predicted results of a theory are falsifiable.

Of course, unless we intuitively agree on the meaning of the nomenclature we use, we won’t be able to test a theory. This is simply because if the intensionality of our terms differs, we have no idea what we’re agreeing about! Simply put: we have to speak the same language. And how semiotics (the science of symbols) works remains a fascinating mystery, despite clever attempts in linguistics and cognitive science to resolve it.

Also, interestingly, as I mentioned in another thread, science is a descriptive discipline. It uses formal language to describe the world we live in. But science exists within the framework it attempts to describe. It is a subset of the world it paints. It can never be fully accurate. It can never be 100% the thing in-and-of-itself, since it is a pragmatic model we use to achieve certain goals. The example I used in the other thread is the road map. The more detailed the map is, the more exact the information and the better the map, right? No, since at some point the map becomes too cumbersome to use.

You would think that in this digital age virtual information is endless and hassle-free. But even information in our digital age costs money and takes up space and time. An example is the U.S. military’s new supercomputer, which it will use to simulate nuclear explosions at a 1-to-1 scale. The computer hogs the electricity of a small town! It’s bloody expensive and, on top of that, has to be rebooted every two hours or so. So, summarized: if it takes 100,000 years to calculate the events of the next 10 minutes, what’s the point?

“Race” is a bad term because it intuitively means different things to different people. So testing whether a theory in which the term is used without further explanation is falsifiable is impossible. On the other hand, if you say “race is defined by skin color,” we can better intuit its intensionality. But even then, the problem isn’t entirely resolved (as DSeid and edwino’s discussion of “what is a human” indicates), since some people have skin tones that defy any classification. Nonetheless, we could easily classify them as “somewhere in between.” Splitting hairs in such a classification is totally counterproductive and would bring us back to the “what is a chair” question.


NOTE: The above definition of race is only an example of how such terms need to be intuitively clarified in a scientific context, and of how, even with such clarifications, any term defies an absolute taxonomy. It does not represent how I think race ought to be defined!

Gawd, I love that line!

But I do not see how falsifiability necessarily follows as preferred.

Let us use “race” as the case in point.

Let us first talk social science. Race is a concept of value. The issue is whether or not (and how) the perceived social-group identification based on a set of visible superficial characteristics is associated with various measures, such as access to education, health care, career options and selections, etc. In this meaning it is a clear concept (although fuzzy at the edges) and can be used to create falsifiable hypotheses with measurable outcomes.

Now let’s talk biology. The question here is whether or not sets of factors (external appearance? haplotypes? specific SNPs? ancestral origins? performance on various measures? drug responses?) can be associated with each other, and if so, how strongly. Does a division of populations into groups across one domain have meaningful correlation with factors across a different domain? One hypothesis could be that the set of superficial external characteristics that gets called “race” would meaningfully and independently correlate with other domains. Such a hypothesis is falsifiable. Current evidence is that the correlation between race and some other domains is measurable but poor. The hypothesis has not been falsified. Alternatively, looking at sets of various genetic markers independent of superficial characteristics has been shown to have a significantly higher correlation with other domains. “Race” (the set of superficial external characteristics leading to sociologic classification) is cast aside as a biologic concept not because it is falsified, but because another set of identifying features fits the data better and more comprehensively. At least for the dimensions of interest to those in biology.

The point of any model, BTW, is not to include every specific detail, but to provide an abstract representation of the more complex reality along some critical sets of dimensions determined on the basis of their utility.

I absolutely 100% agree! But this makes it sound as if deciding what is “critical” and “useful” is somehow obvious. It’s not. Since abstraction results in a loss of detail (unless we’re simply removing, or compressing, redundancies), it may result in erroneous data once we try to use it to accomplish real world objectives. No scientific model will ever be infinitely reliable.

You could argue that the Universe is a very redundant place, and that therefore reducing its complexity, à la Kolmogorov, into a simple algorithm or equation is possible. But the fact is we can hardly even describe how a human walks into a hotel, registers at the front desk, and goes to the right conference room where she or he is to give a speech! And yet I am to believe there is a philosopher’s stone in the form of, say, a simple cellular automaton? Get out of here!

So, as far as I go, abstraction implies “not quite exact.” That doesn’t mean that we can’t produce models so seemingly reliable that the imperfections don’t matter, the Newtonian model being one of them. Biological systems, on the other hand, are so complex that to model them is a challenge. The number of factors is immense, and determining which are “associated with one another” is a daunting task. Hence models like those proposed by self-organizing systems (SOS), where you focus on behavior as a whole rather than strict causal chains of specifically interlinked factors.

Then we move out beyond individual organisms into the social and ecological realm, and we get even more speechless. Wow! Does this mean we can’t have falsifiable theories? Absolutely not. OK, since we’re there, let’s stick with race. Let’s say there is a theory that states that “blacks are more likely to commit crimes.” How do we measure this? We can’t, because there is no way to collect data on who actually commits a crime, only on who gets prosecuted for it. So this theory is poorly formulated (not falsifiable) and should be dismissed. Not because it is or isn’t true, but because we can’t determine whether it is or isn’t! It’s bad science because it’s not falsifiable.

If, on the other hand, we reformulate the theory as “blacks are more likely to be prosecuted for crimes,” we can suddenly test it. Why? Because we can collect data on this, data that could falsify the theory! Falsification is a cornerstone of modern science. Thank you, Popper!

Unfortunately, I have to agree that saying “race” is an unscientific term is like saying “bananas” is an unscientific term. Superficial human characteristics have obvious behavioral effects on people. The fact is that blacks are more likely to be prosecuted for certain crimes than whites are. Why is another issue. Again, any scientific theory attempting to explain why must be falsifiable. Otherwise it remains purely speculative. I know that biology and sociology yearn to be extraordinarily scientific in all respects, but where they can’t make predictions resulting in hard data, they remain squishy and soft and not quite the realm of science. If that weren’t the case, Freud would still be alive and well and smack in the midst of us. Fortunately, this isn’t the case. Rest in peace, Freud, and please do not rise from the dead…

Re: the thread title,

does this mean that Fark’s dead-horse “In Soviet Russia, (noun) (verb) you!” cliche has crossed over?

I really didn’t want this aside to go without building on it! I am a big fan of a nonlinear approach to epistemology in general, and this ties in with another thread in which some of us have discussed the limits of the scientific method and of induction with someone who, well, just doesn’t seem to get it. The approach that you allude to, and that I admire, is in direct contradistinction to edwino’s wonderworld of linear causation and understanding from the neat box up. It assumes that a multiplicity of factors interact in very complex manners. It suggests that that knowledge acquisition itself follows the behaviors of other complex nonlinear systems.

To my way of thinking, we need to start with an understanding of the basics of perception and of individual ontogeny. Perception is, at its very basis, an inductive process. From within the retina itself up to cortical levels, we perceive more than is there, both because we have been wired to perceive it so (evolution selected for those organisms which saw a complete square based on four corners alone, for example) and because of what we expect to be there. This happens at many levels of processing and many levels of cognition. Input triggers a best match, which primes us to look for consistent data and suppresses our looking for data which does not fit. If we fail to find that data, or if data shows up which doesn’t fit, then we begin a new search all over again. Find a new fit and continue the process.

The entity of society acquires knowledge in a self-similar way (like other complex nonlinear systems, the pattern is self-similar at multiple levels of analysis). Both individuals and societies evolve their concepts as they go. The child first calls all animals “doggie” and later realizes that a dog is only one sort of animal, and that some that were included in that class are better called “bear” or “cat,” etc. In that case the child is learning to become more specific, more particular, with his or her use of the label, but some learning goes the other way. The child also learns that a particular object is a red chair. From experience with multiple objects, he or she learns to generalize “red” and “chair” as abstract entities that happen to intersect in this exemplar. (Some children have difficulty with this skill: autistics, for example, are hyperspecific; they learn “redchair” without generalizing the concepts to higher-level categories. “Redchair” is an entirely different item than “bluechair,” and knowing one fails to help in learning the other.) A similar process also occurs as society gains knowledge.
Science recognizes the abstract pattern that connects different particular examples and is then able to use that to better understand the abstract class.

The scientific concept is thus a fluid dynamic thing. Subject to contextual modification and to adjustment of specificity and abstractness based on experience.

Does any of this rambling make any sense?