How solid of a science is Psychology?

Oh yeah?

Hi there! I thought I was the only one. Nice to know I have company. :slight_smile:

Nice example you gave of research. Does it annoy you as much as it annoys me that people have such misconceptions about our field? We are either crazy, sleep with our patients, or sit around and dream up weird ass theories. Back when I was teaching, I would spend at least a week harping on the theme of psychology as a science. Thanks to everyone in this thread who is fighting ignorance.

Just wait til you get to know me before you say that.

Probably so.

Here’s a joke I came up with about the popular depiction of psychologists, at least in films, but it has a Sixth Sense spoiler, so I better tag it:

Just about the only decent psychologist shown in movies was Bruce Willis’ character in The Sixth Sense. So, according to Hollywood at least, the only good psychologist is a dead psychologist.

Thanks, I’ll be here all week. Try the veal.

I do not disagree; some sciences deal with far more objectively quantifiable topics than others. I would contend that this makes psychology no less of a science, and I am proud to have chosen the tough route rather than take up something easy, some discipline where everyone can agree. :wink:

I have a B.S. in Psychology from a program that emphasized research, not clinical practice.

IMHO, the key is to remember that what we are trying to understand is behavior, primarily human behavior. We don’t really have a better way to do that than applying the scientific method as best we can, through psychology research.

Reading a few psychology studies in academic journals will help you understand how operational definitions of things like “poor parenting” and “disruptive behavior” are used. For example, in the academic journal they may define poor parenting as absence from the home, acts of physical violence, reading to the child fewer than x hours per week, etc. The problem comes in when the mainstream media report the results and they just say “poor parenting.” It makes it sound like the researchers just showed up and labeled the parents good or bad on a whim.

Y’all need to take a page from the physicists and standardize your units, then the news media could report exactly what you say, e.g.: “Researchers today released a study showing that 84% of children exposed to more than 3.2 kilocrosbys of bad parenting go on to cause 4.2 millidahmers of social disruption over the course of their lifetime.”

I meant the kind of objectified overarching model like the Standard Model in physics. A better way to put it would be that we don’t have a grammar of the psyche.

Depends on the type of drug and circumstances. Caffeine’s a drug. If mental faculties were necessarily disturbed, there’d be no point in giving amphetamines to fighter pilots or students before an exam.

True, we don’t have a grammar of the psyche, AFAIK. I hope I don’t appear to be arguing w/ you or nitpicking meaningless points, BTW. I’m just trying to get across the notion that psych is a science, even if it lacks the simplicity of four dimensions and easily measurable phenomena. I don’t disagree with what you’re saying, nor with what you’re saying about what the OP is saying; however, given the remarks by the practicing psychologists above and their reflections on the general understanding of psychology as a science, I don’t want my agreement with you to be misperceived.

It’s a genetic disorder, I think. I come from a family of precise speakers—probably because any opening is exploited for a joke.

I acknowledge the quality of that joke. It was top shelf. I’m not being sarcastic when I say that.

A big problem really is operationalizing one’s criteria for doing research in the field. For example, I recently heard about work on so-called binge drinking, where “binge drinking” was defined as five or more drinks in one sitting. By that measure, if one goes to the bar at 9:00 pm and has one drink per hour until 2:00 am, one has been binge drinking. Would any reasonable person really agree with that?

Binge drinking is a bad example because it’s not what psychologists study, but it’s a good illustration of the problem. To be honest, I think the problem is blown out of proportion in the layperson’s mind, however. Experiments in, say, cognitive psychology are very nuanced and conclusions rest not on one type of experiment, but on Occam’s Razor; converging lines of evidence are used to establish a conclusion, so that the conclusion is the best explanation of all the evidence. (I wish I could think up an example off the top of my head.)

Just to quibble, alcohol abuse, and substance abuse, are very much topics of study for psychologists. I know many psychologists with funding from NIAAA (National Institute on Alcohol Abuse and Alcoholism), and I myself have published on the topic of tobacco use.

Technically, I don’t believe mathematics is even a science.

In any event I am not sure you understand what “science” is. “Science” is the search for objective knowledge about the natural world. Something is or is not a “science” depending on HOW you search for answers - not whether or not you know all the answers. Science is defined by how you try to discover what you do not know.

A psychologist who pursues answers through the scientific method is a scientist, period, end of story. That the answers might be a little harder to pin down in that particular discipline doesn’t really make it any less a science.

What I want to know is how they can apply the scientific method when their results derived from repeated experiments are bound to be inconsistent. I can’t even trust myself to give a correct response as to why I do certain things, so why would a psychologist be able to explain it better when his results come from people just as unreliable as me?

Sure, in some experiments where scientists just monitor brain activity, we can trust people to be more reliable. It’s the research that comes from asking people questions about their state of mind that I don’t know whether to trust or not.

I’m a hardcore computer scientist/mathematician with the stereotypical disdain for the liberal arts, and I believe wholeheartedly in psychology as a very solid science. One of my favorite classes as an undergrad WAS social psychology, a class that explained how individuals deal with one another (something even us CS smoky-roomers will have to do at some point!), and all of the studies cited were conducted under the most rigorous application of the scientific method possible (and though some of them wouldn’t qualify now, namely the famous “shocking experiment,” Milgram’s obedience study, psychologists carry out their work with the highest possible standards of ethics, something I greatly admire).

Well, they carry the experiments out numerous times. Like a die roll, where every number’s frequency will approach 1/6 over many rolls, certain trends will eventually start to emerge, even across differences such as race, age, gender, etc. I consider that to be quite valid as far as the scientific method goes.
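As a minimal sketch of that die-roll intuition (the law of large numbers), here’s a quick simulation; the roll counts and seed are arbitrary choices for illustration:

```python
import random

random.seed(42)

def face_frequencies(n_rolls):
    # Roll a fair die n_rolls times and return each face's observed frequency.
    counts = {face: 0 for face in range(1, 7)}
    for _ in range(n_rolls):
        counts[random.randint(1, 6)] += 1
    return {face: count / n_rolls for face, count in counts.items()}

for n in (60, 6000, 600000):
    freqs = face_frequencies(n)
    spread = max(freqs.values()) - min(freqs.values())
    print(f"{n:>6} rolls: spread between most and least common face = {spread:.4f}")
```

With only 60 rolls the face frequencies are all over the place; by 600,000 rolls every face sits very close to 1/6. Psychology experiments rely on the same principle: repeat the measurement enough times and stable trends separate themselves from the noise.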

First, because everyone like you is unreliable in different ways. Individual differences are washed out with a suitable sample size.

Second, there are many measures that are quite reliable over time, even at the level of the individual. The Wechsler measures of IQ, for example, are quite likely to yield reasonably similar scores when administered to someone at different times.

There is no doubt that people are fairly poor reporters on many issues, especially when asked to recall information from some time ago. They do much better in the short term, so we can do longitudinal studies where we repeatedly ask a group of people about their functioning. You’ll get nice and robust growth curves this way.

Lakai, psych research is not what you think it is. I get the impression that you think it is either just observing people and drawing conclusions or asking people how and/or why they behave the way they do. It is neither.

As an example, let me describe a typical study in the area I used to study many years ago, which is called confirmation bias. Confirmation bias, ironically enough, is one of the specific phenomena within the area of cognitive errors or biases, which are systematic ways that people’s thinking goes wrong, so asking them “do you do this?” would definitely not work.

Ok, confirmation bias. CB is the theory that although people may think they are unbiased in their search for and use of information, they aren’t. In fact, according to CB, most of us tend to a) look for information supporting our position and ignore information that does not support it, and b) be more critical of information that does not support our position and less so of information that does.

So, to test this theory, do we ask people what they do? Nope. First of all, we might not realize we do this; second, we might not admit to it even if we do. Time to get sneaky. What I want to do in this study is see if people who have been told they failed at a task are more likely to want to see information discrediting the task, and vice versa for those who succeed. I am testing proposition A above.

A typical CB study might go something like this. First, plan on having a large group (let’s say 120 people) in your study. Then they are given a task (anything ambiguous enough that they will believe they did well or did poorly; for example, you could use a task where they have to decode information as quickly as possible). I then randomly assign my participants to be told that they succeeded or failed. Random assignment is important because it ensures that my two groups are equal. Sure, individuals vary, but across averages, groups are equal. For example, I might have one participant who weighs 300 lbs. He is different, right? But if I have a large enough group, his 300 lbs will be balanced out by, say, three people who are 50 pounds overweight in the other group. The average weights are very, very likely to be very, very similar. Same goes for intelligence, cussedness, anything else you can name. Therefore, I can assume that, just like weight, intelligence, etc., a preference for certain types of articles would be equal in my groups, at least until I told them that they failed or succeeded.
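The weight example can be simulated in a few lines. This is a sketch with an invented population (the mean, spread, and the single 300-lb outlier are all made up) just to show that random assignment leaves the two group averages nearly identical:

```python
import random
import statistics

random.seed(0)

# Hypothetical population: 120 participants with varied weights (lbs),
# including one 300-lb outlier, as in the example above.
weights = [random.gauss(170, 25) for _ in range(119)] + [300.0]

# Randomly assign participants to two equal groups of 60.
random.shuffle(weights)
group_a, group_b = weights[:60], weights[60:]

print(f"Group A mean weight: {statistics.mean(group_a):.1f} lbs")
print(f"Group B mean weight: {statistics.mean(group_b):.1f} lbs")
```

The outlier lands in one group or the other, but with 60 people per group he shifts that group’s mean by only a couple of pounds, and the two averages come out close. The same washing-out happens for intelligence, mood, or any other pre-existing difference.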

Now, I take my participants one at a time and give them a fake newspaper article to read. Here’s the really tricky part. In the article, I cite fake sources that say that my task (I have given it an impressive name, like the Holton Hierarchy Test) is a good measure of something (say intelligence) or not. I then say that I have copies of the articles that they can have, but they can only have two. Which would you like to see, I ask, all innocence.

Then I see what happens. What I expect is that if the confirmation bias is correct, most people who “failed” will pick the anti-articles, and most people who “succeeded” will pick the pro-articles. It won’t be perfect, of course. Some who “failed” will pick one of each, some who “succeeded” will pick two anti-articles, etc. But if I look at group averages, I am likely to see that significantly (as determined by statistics) more anti-articles were picked by the “failure” group and more pro-articles were picked by the “success” group. Because the groups were equal before I gave them their failure or success, the difference between them must be due to that. Voila.
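To sketch what “significantly, as determined by statistics” might look like, here is a two-proportion z-test on invented counts (the numbers are hypothetical and not from any real study; real analyses would likely use a chi-square test or similar):

```python
import math

# Hypothetical results: each of 60 participants per group picks 2 articles,
# so 120 picks per group. Count how many picks were anti-articles.
anti_failure, n_failure = 85, 120   # "failure" group: 85 of 120 picks anti
anti_success, n_success = 40, 120   # "success" group: 40 of 120 picks anti

# Two-proportion z-test with a pooled proportion, stdlib only.
p1 = anti_failure / n_failure
p2 = anti_success / n_success
p_pool = (anti_failure + anti_success) / (n_failure + n_success)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_failure + 1 / n_success))
z = (p1 - p2) / se

# Two-sided p-value from the standard normal CDF (via the error function).
p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

print(f"z = {z:.2f}, p = {p_value:.6f}")
```

With a gap that large between the groups, the p-value comes out far below the conventional 0.05 cutoff, so chance assignment alone is an implausible explanation and the failure/success manipulation gets the credit.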

I seriously dumbed down this study (there would be tons more detail to make sure everyone was treated the same, didn’t know others’ responses, etc) but I wanted to get the flavor across.

If anyone is still reading, the point is that we use the scientific method. Because our participants are thinking humans (even the college students!) things can get difficult, but we don’t just ask people what they do and call that research.

Lakai ought to follow CognitiveDaily for a few days.