About Applied Science and the Scientific Method

I am a researcher in a particular field of engineering. Academic engineering research is mostly applied science (i.e., academic researchers do a mix of working out better ways of “doing engineering”, and actually “doing engineering” but at the leading edge of what industry is prepared to experiment with).

A few of us have been concerned that the research in our field could be improved, so we have been trying to set out what would constitute “better” applied research.

The model we have come up with is something like this:

The way industry conducts engineering is based on prescriptive (recipe) knowledge. Each company unit or project follows processes that they believe are the best way of reaching the end results that they want to achieve.

Researchers seek ultimately to improve current industry practices. Any complete research question is of the form “How should I do it?” for some scope of “I”, “do” and “it”.

To answer any complete research question, there are four types of knowledge that must be generated:

Descriptive knowledge talks about how the world currently is. (Sometimes called problem identification).

Prescriptive knowledge provides guidance for action.

Philosophical knowledge identifies the context, including what we consider “good”.

Quality knowledge applies to the prescriptive knowledge, and includes claims about what is good about the prescriptive knowledge, and evaluation of those claims.

Now a particular research project doesn’t need to generate knowledge in all four of these areas (and could generate knowledge in one area but for multiple questions), but without a full set of knowledge for a particular question, the question is not sufficiently answered.
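
To make the model concrete, here is a rough sketch in Python. The names and structure are entirely my own invention, purely to illustrate the “full set of knowledge” idea - not a proposal for actual tooling:

```python
from dataclasses import dataclass, field

# The four kinds of knowledge the model says a complete answer needs.
KINDS = ("descriptive", "prescriptive", "philosophical", "quality")

@dataclass
class ResearchQuestion:
    """A "How should I do it?" question, for some scope of I, do, and it."""
    scope: str
    # Findings accumulated so far, keyed by the kind of knowledge they give.
    findings: dict = field(default_factory=lambda: {k: [] for k in KINDS})

    def add(self, kind: str, claim: str) -> None:
        self.findings[kind].append(claim)

    def missing(self) -> list:
        """Kinds of knowledge still absent. By the model above, the question
        is not sufficiently answered until this comes back empty."""
        return [k for k, claims in self.findings.items() if not claims]

# A hypothetical example question, with knowledge in three of the four areas.
q = ResearchQuestion(scope="how should a small team do code review?")
q.add("descriptive", "teams currently review ad hoc, after merging")
q.add("philosophical", "'good' here means fewer defects reaching users")
q.add("prescriptive", "review every change before merge, in pairs")
print(q.missing())  # ['quality'] - no evaluated claims about the prescription yet
```

A single project could fill in any one of those lists, for one question or several; the point is only that the question stays open until all four are non-empty.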

All of this comes with the caveat that any piece of knowledge can be made better, made more certain, or replaced through further research, possibly resulting in a new answer to the original question.

The interesting question I’m opening for debate is this: Where does the scientific method fit into applied science?

My argument would be that quality knowledge is actually a special subset of descriptive knowledge, and that the scientific method in its Popperian (falsificationist) form can be applied only to descriptive knowledge. I would suggest, though, that it is not the only method that can be applied to descriptive knowledge, and that it is not always the most suitable one.

[Rules: Let’s keep this thread to applied science, and stick to talking about knowledge, research and evidence. I don’t mind a drift from the precise issue for debate so long as we stay within those bounds, and don’t drift to questions of non-applied science or non-science. Let’s also stay off the topic of whether applied science should rightly be called “science”.]

What do you mean by “good” in your definition of quality knowledge? Does this mean something like defect levels, or is it more philosophical?

Part of the philosophical knowledge is determining what is “good” in a particular case. For example, one “good” that could be achieved by a new software development method might be that it results in cheaper development costs. Another “good” might be that it results in fewer defects. Yet another “good” might be that it is in greater harmony with the universe :).

So philosophical knowledge tells you what claims are useful in a particular context. Quality knowledge tells you what claims are made and supported for a particular piece of prescriptive knowledge.
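
To put that distinction in terms of the toy sketch above (again, the names are purely illustrative): philosophical knowledge fixes which criteria count in a context; quality knowledge records the claims made against those criteria for a specific prescription, together with the evidence for them.

```python
# Philosophical knowledge: which "goods" count in this context.
criteria = {
    "cost": "cheaper development",
    "defects": "fewer defects shipped",
}

# Quality knowledge: claims about one piece of prescriptive knowledge,
# each tied to a criterion and to the evidence used to evaluate it.
quality_claims = [
    {
        "prescription": "review every change before merge, in pairs",
        "criterion": "defects",
        "claim": "cuts shipped defects roughly in half",
        "evidence": "before/after defect counts on two pilot projects",
    },
]
```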

Okay - I’ll respond later. Now I actually have to go back to doing this very stuff.

I’m not at all scientifically minded, but “Quality Knowledge” and “Prescriptive Knowledge” strike me as the same thing.

Wouldn’t it simply be descriptive knowledge? (And I do see it that way.)

As you said, the big difference between applied science and regular science is that applied science is often applied to human invention, not nature. In my graduate research I worked in computer architecture and computer languages, both things that did not exist 50 years before. We study what is out there, but we also create new things that will be studied later.

So, in traditional science they make a hypothesis about the natural world, devise an experiment to test the hypothesis, and see if it can be falsified. Watson and Crick studied the DNA molecule and tested a hypothesis about its structure. If they had been applied scientists, they would not only have studied its structure but also proposed a better DNA molecule, one with less junk, better coding, and fewer mutations. That is the kind of paper I see all the time. You don’t compare your work with nature, but with others who have gone before, trying to do better by some criterion of goodness or efficiency. Very few papers I’ve seen (and I’ve seen a lot) have a structure similar to the biology papers my wife studied.

I hope this is somewhere close to what you wanted to discuss.

Yes, this is the sort of thing I was getting at. I guess I was testing whether other people see it the same way I do, or whether there would be an instant uproar of “that’s not scientific!”.

That’s what we learn in elementary school, but this chemist has found that most science is “look! shiny!”, with only a few minutes now and then devoted to writing down that “these observations are consistent with [scenario].”

I’ve never seen a reviewer criticize a paper for not being scientific - but the stuff I run has a lot of industry participation, and many of us are more on the “what works” side of things.

Sure, and applied science also collects a lot of data. But I’d say experimental physicists are probably closer to the applied-science side than theoretical physicists - though I haven’t read papers by either, so this is just a guess. I don’t think data gathering is an area where these things can be distinguished.

So true.