Non-Experts and Science

In the “Quantum Mechanics & Mind” thread, the question was pondered by Expanso Mapcase:

I think the question can be generalized even further: why do non-experts in science so readily think they’ve come up with some amazing theory, and why are they so resistant to the facts showing that their theory is very wrong?

My response was:

Half Man Half Wit responded with:

So here we are and we’ll see where this goes.

I definitely agree with Half Man Half Wit that it is such a large waste. I spent more time than I should have talking with MantraPhilter, although nowhere near as much or in as much detail as others, trying not just to point out in the most basic ways where he was incorrect, but to show how he could go about correcting himself, to no avail. But this isn’t about the time I, and others, wasted. I believe that people can learn, and MantraPhilter was passionate about physics; it is great to have somebody else trying to expand human knowledge. Like other scientists on this board, I’m sure we all have dozens of stories of trying to explain our area of expertise to non-experts on the internet, only to have them tell us how wrong we are and fire back with some rudimentary knowledge that was itself incorrect (or maybe that’s just me).

So how do we fix this? Well, I’ve been a proponent of improving critical thinking skills in school. This would help not just with this bizarre non-expert scientist effect but in other matters as well, but it doesn’t really help with the current situation. How do you reach a non-expert who is convinced they’re right, but who doesn’t even have the tools to know they’re wrong, and persuade them to go and acquire the basic tools to see how wrong they are? I wish I knew.

In any case, as I said above, I’m very interested in what others have to say about this and thoughts on how the situation can be improved.

You’re up against the Dunning-Kruger effect.

I just want to say “Good luck, we’re all counting on you”.

There is such a huge barrier to entry about such things. Even if access to published research was free (and that would help), it takes someone who is already an expert in a particular field to be able to read and understand the material enough to make further contribution. It’s hard work to get to this level and much easier just to post some shit to a message board.

This is very true. I remember my first paper in grad school, a literature review on smart weapons (I had just left the military). It was astonishing how much material there was and how deep the material went. It simply was not what I expected to find.

The school where I did my M.Sc. actually had this very conversation: how can we get scientific literature to the layperson in a way that they will understand? They didn’t come up with anything, although they favored the idea of creating learning objects (LOs) that would explain the critical elements of complex material at a layperson’s level. Graduate students at the university would build LOs as part of their course material, and the LOs would then be freely accessible to the public. There are LO repositories out there, but from what I’ve seen they don’t tend to be very useful. Even with a couple that I built during my M.Sc., when I look at them now I think: what could somebody really get from this? Probably nothing, unless you already knew everything I was going to tell you anyway.

Science is hard, not everyone can do it and relatively few can legitimately claim expertise. Pseudo-science is easy, everyone is an expert.

The problem seems to come down to the use of, or rather the need for, analogies, and the success of Einstein’s gedankenexperiments. The scientifically interested layperson learns about how things behave like waves or particles or intentional agents (genes) and then starts extending that hazy understanding into logical constructs.

It’s a foundation built on sand, backed up by mental models without empirical results. How could you ever be wrong? And since you consider yourself clever, and since you don’t yet understand that “like” isn’t “is”, people’s explanations make little sense.

I think that many of these people simply can’t grasp that they aren’t as intelligent or well educated as they think they are, but the ones who are really, really insistent and build their whole lives around their pet idea seem much more likely to have some underlying mental problems: not merely arrogant but delusional or paranoid (especially the ones who think that there is a conspiracy among all scientists to suppress their brilliant new idea). So very often it isn’t an educational issue, it is a medical issue.

I also think a lot of the newer science TV shows don’t help much. I understand the producers’ incentive is to make the content as accessible as possible, but it usually ends up with camera-friendly scientists stating scientific facts and opinions without going into any details that might be lost on the average viewer. As a result, it looks easy: there is no math behind it, and it just has to be somewhat logically consistent and explainable with what we already know. I think a lot of people believe this is all there is to doing science.

But once you learn just a little bit more and begin to look at actual research, you are humbled by how little you know compared to those in the field. I think a lot of pseudo-scientists open the door to this room, quickly close it, and pretend the room doesn’t exist.

An important free booklet to download, from people with plenty of experience in debunking:

Implied, and mentioned in other places, is the very important point that people need to be told what the current consensus or majority view of the experts is on an issue. It is such a key item that misinformers out there see it as something that **must** also be minimized or dismissed in order to continue with a larger deception.

IMHO one more layer of information should be used: material from people who are not experts themselves but who have plenty of experience in looking for simpler explanations and have already done a lot of the homework. I’m referring here to groups such as Rationalwiki, Skeptical Inquirer, Skeptical Dictionary, Snopes, or even Cecil. :slight_smile:

More than once, on this message board too, I have encountered people who claimed that if Snopes or a popular science show like NOVA on PBS did not talk about an issue, then it was not a real one. So not only did they cling to the sources that fed them misinformation, they also read the apparent silence of popular science and debunking groups as a “message” that there was nothing to the issue.

As it happened, I showed them that Snopes and NOVA did in fact talk about those issues (the small number of articles on a subject has to be blamed on some peculiar editorial choices), and that their coverage matched what most experts reported.

It was, IMHO, a good thing to see them acknowledge that a source they themselves agreed should be consulted had looked at the issue and agreed with most experts. And one should not forget that when a person cannot be reasoned with, it is the other readers or viewers out there to whom we have to direct our message.

At least they stopped pushing the idea that serious popular sources of science and debunking disagree with the experts on those issues.

Some things are just flat complicated …

Nevermind why cyclones spin the direction they do … why do they spin at all? … I’ve watched a few Wikipedia pages change over the years and it’s fun to read through the talk pages and some of the screwball theories posted … but that’s a crowdsourcing issue and doesn’t affect BeepKillBeep’s idea of having “learning objects (LOs)” … and to be fair and honest, Wikipedia is slowly getting things right; it seems that eventually someone comes along who’s not only knowledgeable in the subject but also skilled at teaching it … however, Wikipedia is an encyclopedia, not a textbook, so it has its role in helping us understand, but it’s kinda sorta a bad place to learn things from scratch …

Some things just seem obvious and intuitive once we understand … we tend to forget the weeks and months we struggled to earn this understanding …

I would make a couple of points.

[ol][li]Scientists tend to be overconfident in whatever theory prevails at any particular time. This has been true of scientists in previous generations and is true of scientists today. Of course, scientists in every generation acknowledge that previous generations got things wrong but hold that this is no longer possible today, yet there’s no objective reason to accept this. It’s part of human nature, apparently. The result is that scientists keep being proved wrong, at least on details of their theories. This gives ammunition to laypeople: if scientists could be wrong about some things, then scientists can be wrong about anything, and if scientists might be wrong about this particular thing they’re studying, then who’s to say that I can’t come up with the correct theory?[/li][li]Some of what BeepKillBeep describes as “well-established because so much has been built on top of it that verifies this understanding on a daily basis” is really circular. Much of what gets “built on top of it” is interpreted in line with conventional scientific understanding simply because that’s what people assume is correct, not because it’s independent proof of that understanding. But this is not always obvious to people who have been taught to understand these things in line with those theories; to them it may seem obvious that the evidence must be understood in that manner. So someone who looks at it from a completely different perspective may seem further off than they actually are.[/li][/ol]

That said, the likelihood of any particular lay expert being right while scientific consensus is wrong is extremely small. But the above factors give support to such intrepid lay experts as are willing to challenge the prevailing consensus.

I would just like to point out that, despite the Dunning-Kruger effect and the vast number of non-scientists attempting to send their contributions out into the world of science,* and despite the increasing specialization of science, it is still possible for non-experts to make real contributions to the sciences.

Such successful contributions tend, however, to be restricted in their scope. Those trying to refute Einstein or quantum mechanics (as some people described in Martin Gardner’s still-useful *Fads and Fallacies in the Name of Science* did) are almost certainly wrong.

But Marjorie Rice, who read a 1975 *Scientific American* article on tessellations, went on, by herself, to:

1.) invent a notation for tessellations
2.) discover four new tessellating pentagons
3.) discover over sixty new pentagonal tessellations

She had a high school education and no notable background in math. She did all this in her spare time.

The case of Srinivasa Ramanujan is even more famous.

People say he was a genius, as if that explains it all. But most people probably wouldn’t call Marjorie Rice a genius. She was arguably obsessed, but that’s a good ingredient for making discoveries.

  • I know – I’ve been the recipient of papers and even books purporting to re-arrange our worldview

Look no further than our own debates forum to see how hard it is to do:
1 - unemotional, unbiased, reasoned exchange of info

2 - Update of mental model/world view when presented with new data
I can count on one hand the number of posters on this board that I read that I think genuinely read, accept info, process it, respond appropriately, revise their view, etc.
We (humans) suck at this process.

[quote=“Fotheringay-Phipps, post:11, topic:793866”]

I would make a couple of points.

[li]Scientists tend to be overconfident in whatever theory prevails at any particular time. This has been true of scientists in previous generations and is true of scientists today. Of course, scientists in every generation acknowledge that previous generations got things wrong but hold that this is no longer possible today, yet there’s no objective reason to accept this. It’s part of human nature, apparently. The result is that scientists keep being proved wrong, at least on details of their theories. This gives ammunition to laypeople: if scientists could be wrong about some things, then scientists can be wrong about anything, and if scientists might be wrong about this particular thing they’re studying, then who’s to say that I can’t come up with the correct theory?[/li][/QUOTE]

This is something many nonscientists tend to think, incorrectly, about science and scientists. Most scientists would like nothing more than to upend the orthodoxy and be the one person who presents the new theory. Most scientists are always questioning the prevailing wisdom. That’s what lies at the heart of science.

While true, it’s also simultaneously true that new correct models/ideas can take a long time to become accepted by the majority, even if the old idea didn’t really have good data to support it.

So, even though scientist A might be actively seeking to prove new idea #1, he/she may not have the mental bandwidth for an analysis of scientist B’s new idea #748 and thus assumes that what he/she was taught is still valid.

It’s not clear what you think I’ve said that you’re purporting to contradict.

As to your own comment, it’s worth noting this observation from Max Planck:

These are good points. A good scientist should be willing to change their view based on a new, solid argument, and it is definitely true that scientists are human and can therefore become very invested in what they know and be slow, or completely unwilling, to budge. Typically, though, the arguments that come from laypeople are not very well formulated and have serious errors. And that’s a shame, because if there was a kernel of an idea, it gets lost in the mess.

Now, when I’m talking about building on top of that which is very well-established, I’m talking about very basic information. I find that the ideas that come from laypeople usually try to overturn the very basic elements, because that’s what they know about; i.e., they don’t have the depth of knowledge to formulate an idea that sits in the scientific gaps. These things are hard to overturn because they’re so well-established. Just to speak from personal experience for a moment: at least one person, usually an undergraduate, comes to me every year with their big idea on artificial intelligence. 95% of the time it deals with either video games or strong AI. Of those, probably 90% concern artificial neural networks (ANNs). Of those, probably 99% have a profound misunderstanding of what an ANN is, what it does, and how it works. This isn’t a case of “oh, maybe ANNs actually work the way they think.” They don’t. They just don’t. If ANNs worked the way they think they work, then hundreds or thousands of experiments per year would simply fail. So that’s what I mean: there’s really no chance that tomorrow I’m going to read a paper saying “ANNs don’t work the way we think.” It could happen, but I’d bet against it every time, regardless of the odds.
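For readers who have never looked inside one, here is a minimal illustrative sketch (my own toy example, not anything from the thread) of what an ANN actually is at bottom: nothing but weighted sums of inputs pushed through simple nonlinear functions. The weights below are hand-picked for clarity; in real networks they are learned from data, but the mechanism is the same, and there is no hidden magic beyond it.

```python
# A minimal two-layer feedforward network, in plain Python, computing XOR:
# the classic function that no single neuron can compute on its own.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is positive, else 0."""
    return 1 if x > 0 else 0

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus bias, through the activation."""
    return step(sum(i * w for i, w in zip(inputs, weights)) + bias)

def xor_net(a, b):
    # Hidden layer: one neuron computes "a OR b", the other "NOT (a AND b)".
    h1 = neuron([a, b], [1, 1], -0.5)    # OR
    h2 = neuron([a, b], [-1, -1], 1.5)   # NAND
    # Output layer: fires only when both hidden neurons fire (AND).
    return neuron([h1, h2], [1, 1], -1.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

That’s the whole substance: dot products, biases, and nonlinearities, stacked in layers. The common layperson’s picture of an ANN as a little brain that understands things is exactly the kind of misunderstanding described above.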

So yes, there’s definitely room for laypeople to contribute, and that’s why I try not to simply dismiss a layperson. And yes, there’s certainly room for being careful not to get caught up in simply accepting what we think we know as settled. But at the same time, there are certain principles in every scientific domain that are such base truths that it is very unlikely they will turn out to be completely wrong. Still, I think these are good points, and you’re right that the kind of thinking that results in an error does give ammunition to laypeople to say, “But so-and-so was wrong, so maybe you are too!” All the more reason for the scientific community to be careful in making definitive pronouncements (though usually it isn’t the scientist who makes such pronouncements but the journalists covering the scientist).

Certainly anything that needs to actually work is harder to overturn. Because if the conventional understanding has produced inventions which actually work, then someone looking to challenge that has to come up with an alternative explanation for why the prior understanding produced workable machines.

I’m thinking more of fields which are not geared to producing workable machines. E.g., suppose an archeologist excavating some site decides that the site was probably for such-and-such use. Then he finds tools and implements and interprets them in line with his theory, as tools involved with such-and-such use. And so on, for any other features at the site. After a while, there’s an accumulated weight of things associated with such-and-such use, and these can seem like additional evidence when in reality their interpretation is based on the initial hypothesis, and proving the hypothesis from the additional evidence is circular. (I’m seizing on an archeological example because I was once struck by this when reading up on the dispute over the meaning of the Khirbet Qumran site, but it has much broader application.)

I understand what you’re getting at.

Writing as a non-expert…

Science, like learning foreign languages, is clearly not for everyone, and there is no getting around the fact that it is not essential to know things like the structure of an atom or how electricity is generated to live in the modern world. In a strictly pragmatic sense, if one doesn’t need such knowledge to pay rent/mortgage, do laundry, eat dinner, watch sports or chase girls/boys, what good is it?

That said, teaching the Scientific Method to develop/enhance powers of observation, deductive reasoning, creativity and the ability to view results/errors in proper perspective remains the best technique humans have yet devised to approach and solve problems, while establishing a baseline of facts (called “theories,” and ever subject to being proved wrong). In trying to correct others who do not have such training, there will inevitably be (too many) instances where the effort is folly.

For those cases where there is hope of enlightenment, I have found explicit correction to be less effective than the Socratic Method: asking questions of other people until they realize the weak point(s) of their argument. This process can be time- and effort-consuming and may require significant amounts of creativity and intuition - to say nothing of patience and fortitude - on the part of the questioner. Nevertheless, a successful result can be very gratifying for all parties.

As for “non-experts” offering “scientific expertise,” many such instances attest to the human inclinations for attention and feelings of self-worth (and sometimes greed) as outweighing the inconvenience of field-specific ignorance.

You will now have to excuse me as it is the time of day I have set aside for finishing up work on my perpetual motion machine…