The existence of scientific entities

Do the entities in scientists’ explanatory models and theories (such as the Higgs boson or selfish genes…) actually exist, or are they primarily useful inventions for predicting and controlling the natural world?

I would love to have some arguments suggesting why they DO exist, but also some as to why they DO NOT actually exist.

Thanks!

Yes, almost all of them do, for the most part; the others are mathematical conventions until they’re found, or are redefined within a better mathematical model.

Define “existence,” Grasshopper.

Philosophers call this the question of scientific realism. Sections 2 and 3 of the linked article run through a number of arguments for and against scientific realism, along with counterarguments.

Beyond that, we’re getting into GD territory.

This was a debate in the 19th century about atoms. Atomic theory accounted for more and more observable traits of the physical world, but to the very end of the century some holdouts doubted that atoms were “real”: that there really were tiny little spheres, rather than the world simply acting that way. The clincher was Einstein’s mathematical proof that the random thermal jiggling of discrete atoms would exactly account for the observed Brownian motion of particles suspended in a fluid. Ironically, just as the reality of atoms was established, subatomic physics and quantum theory came along and established a strongly counter-intuitive view of reality. The best we can currently say is that quantum entities are as “real” as anything we can know.

All models of reality are wrong. Some are less wrong than others. Some can make useful predictions about reality.

As a working biologist-in-training, I assume that there is some underlying physical truth for the phenomena I study. I build conceptual models of how organisms work in terms of cells, genes, and proteins. For example, we make a lot of abstract cartoons that illustrate our understanding of gene and protein interactions. These models may reflect our best current understanding of some process, but they are also oversimplifications and certainly wrong in some way. Even if the models are wrong, though, we can use them to make predictions, design new experiments, and incorporate the data into a new and improved model.

The concept of a “selfish gene” is a model sort of like that. It’s an abstract concept of how genes behave. It’s also an example of lies-to-children: “A lie-to-children is a statement that is false, but which nevertheless leads the child’s mind towards a more accurate explanation, one that the child will only be able to appreciate if it has been primed with the lie.” As such, the selfish gene concept is a pretty useful way of learning about genetics, and there are certainly examples of genes that do act in a straightforward selfish manner. But most genes are much more complicated than that, and there is no Platonic True Selfish Gene.

How do I know you are real? Let us say you are standing in front of me and I am looking at you. What actually happens is that my brain perceives you as standing there. This supposedly means that certain electromagnetic waves impinged on my retina and triggered the optic nerves that signaled the brain and entered my consciousness, whatever that might be. Pretty far-fetched, wouldn’t you say? On the other hand, things seem to work well when I pretend you are “really” standing in front of me. So I adopt the position that things I see are real. Well, several more levels of inference down, I come to the Higgs boson. And choose to believe it is real. But that’s just me.

As mentioned above, the models explain behavior. There may or may not be something very much like the models that exists in physical reality. As we learn more about any subject, we learn about flaws in earlier models. Sometimes those flaws cause us to completely give up the earlier model (e.g., we gave up Newton’s fixed reference framework as a model for space, in favor of Einstein’s space-time continuum). Sometimes we revise the model, as when we exchanged the rather planetary-looking Rutherford-Bohr model of the atom for a series of successive refinements leading to the modern model, based mostly on the work of Pauli, Heisenberg, and Schroedinger, which has probability-based clouds of electrons in different wave-based “orbits”.

The link that MikeS provides covers the argument too well to need further discussion, but how many are going to read it all the way through?

As for the OP, define “exist.” I mean really and thoroughly and absolutely rigorously define “exist.” Can you do it? Of course not. Which is great for argument, but you shouldn’t expect anybody to give a satisfactory answer about something nobody can define and no two people agree upon.

This is important - a model or theory that can consistently do this tends to be regarded as having a strong connection to reality.

Here’s a little thought experiment to help illustrate.

Let’s say we make a vast supercomputer and set it up with a program that creates the conditions for intelligent life in a model world of some kind. Imagine them figuring out their world. I bet they’d end up figuring out things like the underlying numeric formats (e.g., how many bits their integers had, or which IEEE floating-point formats were used, to the extent that these were reflected in the modeled world). They’d figure out a lot of the properties we’d coded into their simulated world, maybe even some we hadn’t thought of ourselves.

But they’d probably never get anywhere near any kind of realization of the electronic substrate on which their world ran. From our point of view, these things are clearly real and the foundation of their reality. But from their point of view, as long as none of the characteristics of the substrate “leaks” into observable phenomena, it would be silly for them to speculate.
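To make the “leak” idea concrete: here’s a minimal sketch in Python of the kind of probe such inhabitants could run, assuming (purely for illustration) that their world’s arithmetic was implemented in IEEE 754 double precision.

```python
# If the substrate does arithmetic in binary floating point, sums that
# "should" be exact come out subtly wrong -- a leak from the substrate.
a = 0.1 + 0.2
print(a == 0.3)      # False: 0.1 and 0.2 have no exact binary representation
print(abs(a - 0.3))  # a tiny nonzero residue, 2**-54 (about 5.5e-17)

# The inhabitants could even measure the substrate's precision by hunting
# for machine epsilon: the smallest power of two eps with 1.0 + eps != 1.0.
eps = 1.0
while 1.0 + eps / 2 != 1.0:
    eps /= 2
print(eps)           # 2.220446049250313e-16, i.e. 2**-52: 64-bit doubles
```

If none of that residue ever showed up in their observable physics, though, they’d have no way to run the experiment at all.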

So, what’s the reality? We can never know, we can only know what we can learn. There may actually be something real underneath or behind the scenes that’s radically different in some way from our models, but if it has no impact on what we can observe, it might as well not exist for all that we can say.

I have an epistemology, not an ontology.

Exactly!

I think this is a good thought experiment, but I’m not sure about the conclusion. Basically, the thing is that computers (sufficiently capable ones) are universal, i.e. each computer can emulate every other computer – such that in principle, the simulation could be run on anything from a cellular automaton to a modern-day supercomputer, without any observable difference ‘from the inside’. So, in principle, any theory supporting universal computers is a theory of everything, to the extent that it can describe a computer running a simulation of everything. But there still needs to exist a mapping from the states of the computer to the elements of the theory: Maxwellian electrodynamics certainly suffices to describe a universal computer, but it’s not therefore a theory of everything; at a minimum, you’d need to specify the computer and its program, which would translate to giving a certain electromagnetic field configuration as additional data. And these additional data are then what correlate to the physically real entities within the simulation – and such a correlation exists for every possible simulation and physical implementation.

So, while it is possible that we’re within a simulation run on a computer comprised of electromagnetic fields, or cogs and wheels, or whatever, that fact is not what we look for in an explanation of the way the world is. In a theory of everything, rather, we look for whatever theoretical entities within a given framework are necessary to give as complete a picture as possible. These will not be unique: it often happens that there are ‘dual’ descriptions of physical phenomena. For instance, within the AdS/CFT correspondence (what that is is not important right now), one salient feature is that there are two pictures describing the same physics that differ in something as basic as the number of physical dimensions; nonetheless, observable phenomena are accounted for equally well by both accounts. This of course means nothing other than that the difference in dimensions is accounted for by some other data within the other framework.

Gah, I shouldn’t write things like this while watching TV. I guess the point is that you can model salient features of the world using different methods, just like you can model, say, the aerodynamic properties of a car using clay, or lego bricks, or carve a piece of wood, or styrofoam, etc.; but what is being modeled is a property independent of the implementation of the model, a form in some general sense. Similarly, differently implemented simulations nevertheless model the same properties, no matter the physical implementation or precise program underlying them.
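Here’s a toy version of that substrate-independence in Python (Rule 110 and both implementations are just examples I made up for illustration): the same one-dimensional cellular-automaton ‘world’ is run on two completely different internal representations – a list of cells and a single packed integer – and the visible history is identical either way.

```python
RULE = 110  # Wolfram's Rule 110, chosen arbitrarily as the world's "physics"

def step_list(cells):
    """One update of the automaton, with the world stored as a list of 0/1."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] << 2 | cells[i] << 1
                      | cells[(i + 1) % n])) & 1 for i in range(n)]

def step_bits(world, n):
    """The same update, with the whole world packed into one integer."""
    mask = (1 << n) - 1
    left = ((world << 1) | (world >> (n - 1))) & mask  # left neighbours
    right = (world >> 1) | ((world & 1) << (n - 1))    # right neighbours
    new = 0
    for i in range(n):
        idx = ((left >> i) & 1) << 2 | ((world >> i) & 1) << 1 | (right >> i) & 1
        new |= ((RULE >> idx) & 1) << i
    return new

# Two substrates, one simulated world: the visible histories coincide.
n = 16
cells, world = [0] * n, 1 << (n // 2)
cells[n // 2] = 1
for _ in range(20):
    cells, world = step_list(cells), step_bits(world, n)
    assert sum(b << i for i, b in enumerate(cells)) == world
print("histories identical for 20 steps")
```

The mapping `cell i ↔ bit i` is exactly the ‘additional data’ mentioned above: without it, nothing in the integer substrate tells you which physical entities it’s simulating.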

And conversely, any model or theory that cannot be used in the service of prediction is pretty much useless and worth discarding. You would never change your mind in favor of a new idea that failed to predict events.

What’s the difference?

Peering at the fundamental fabric of reality is like looking through a glass, darkly. A very fuzzy, dark glass.

There are strange dualities found to be complementary parts of every seemingly discrete thing we can point to.

Time is space, and vice versa. Mass is energy, and vice versa. Electricity is magnetism, etc. It would seem logical to assume a far more strange, Möbius strip-like reality with just a few, if not merely one or two kinds of essences of reality manifesting themselves in their myriad ways to us; an almost impossible tangle of nature itself:

  • …time which is dilated by space when bent by the presence of gravity that’s manifested by mass, which is really pure energy that coalesces as matter that moves through time…

I feel this is just the tip of the proverbial iceberg.

Exactly.

In the thought experiment, the artificial inhabitants could deduce computation, but they could not deduce whether they were running on a Mac or an Intel. Yet the reality, to the experimenters, is that they happen to be running on a Mac.

Is that important? No … but it is REAL. Would the results be the same on an Intel? Yes. Who cares? Nobody. That’s the point. Reality doesn’t really matter as much as one might think. What matters is what’s essential about reality. Oh heck, maybe I should be watching TV.

Like, the laws of nature invert, just outside the orbit of Mars… :smiley:

I won’t presume to know exactly what you mean, but…

In general, when people ask this, they’re probably wondering if there’s some sort of tiny particle called the Higgs Boson that, with a sufficiently large omniscope, we’d be able to see. The answer, as far as I understand particle physics and according to my own prejudices therein, is “not in the way a billiard ball exists”. Genes probably have much more of a concrete existence. DNA definitely does.

A scientist who sits around thinking for a while will probably come to the conclusion that most of “it” outside her own mind is subjective, individual, and “illusory” in some important ways. On the other hand, she’ll probably conclude that this isn’t that important. The models have wonderful predictive qualities. Either they predict what we observe, or they don’t. The really exciting part is when they don’t, because that means that we’ve observed something that the model doesn’t predict, and therefore we’ve added to our perception of the universe.

Two concrete-like examples:
In quantum chemistry, there are a variety of ways of predicting the interactions of atoms using quantum-mechanical models. Some models do a magnificent job of predicting the “observables” (such as absorption spectra, reactions, etc.). Unfortunately, when one looks at these models, they make very little physical sense. They’re full of tiny tweaks and adjustments that don’t build any sort of “picture” of an atom.
Conversely, there are models that predict the shapes of atoms and their electron orbitals, etc. These are fun to play with, but mathematically they’re junk; they fail to predict observable phenomena.
Nobody has yet found a model that can do both. :dubious:

Or, the joke about scientists and engineers:
A professor once had a class that was about 1/2 scientists and 1/2 engineers. He tacked a $100 bill to the wall and then said, “OK, the first one to get to the wall gets the $100. The rule is, though, that you have to cross the room by ‘halves’: you have to stop halfway across, then halfway across the remaining distance, and so forth.”
Well, the scientists, who thought they understood exponential decay, passed on this. They reasoned that nobody would ever get to the wall, because the distance from the person to the wall would get shorter and shorter, but never be zero.
The engineers, on the other hand, immediately went for the $100, because they realized that while you’d never get to the other side, you’d get close enough! (The classic version of this joke is sexist, which is why I didn’t use it).
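The engineers’ “close enough” is easy to check with a couple of lines of Python (the 10-metre room and the 1 cm threshold are numbers I made up for illustration):

```python
# Each step covers half the remaining distance, so after n steps the gap
# to the wall is room / 2**n -- never zero, but very quickly negligible.
room = 10.0          # metres (made-up room size)
close_enough = 0.01  # 1 cm: near enough to grab the $100

gap, steps = room, 0
while gap > close_enough:
    gap /= 2
    steps += 1

print(steps, gap)    # 10 steps leave a gap of under a centimetre
```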

A reasonable scientist would say that despite all the “barriers” between objective reality and “ourselves”, we can get close enough to do interesting things.

Actually, selfish genes are a pretty lousy way to learn genetics, and the idea was conceived at a time before much of anything was really known about genes as entities. The concept of the selfish gene was intended by Dawkins as a model to explain how altruism (Dawkins was an ethologist by training, not a geneticist) might arise through natural selection, for which it does help…sort of.