…if you ignore the MBTI Step II analysis.
If you want to explain Step II I’m not getting in your way, but his answer is equally false for Step II.
In one of my classes in grad school we all took some short version of the MBTI. The purpose of this exercise was mostly to learn about something we might encounter when out in the professional world, not to evaluate our own personalities, but our instructor did ask us how many people came out “E”. Out of 40-odd people in the class, there were only 4 "E"s.
Did I mention that this was a graduate program in library science?
Apparently our “I”/“E” split was pretty typical of library students. When I mentioned this to other people in the program who were not in that same class, they always asked “Who were the four extroverts?” No one was surprised when I named them. It was pretty clear that they really were much more extroverted than the rest of us. Very few of my classmates were the hiding-behind-a-chart type, and most did not seem introverted or withdrawn at all when around people they knew, but the four "E"s definitely stood out from the rest of us. One of the four was even nicknamed “the social director” because she was usually the one who planned parties and nights out for our social circle.
A given score on a Myers-Briggs test tells you a score on a Myers-Briggs test. That is all. No more, no less. To claim that this score corresponds to anything at all in the real world is a matter of faith. If you want to believe it, it’s true. Otherwise, it isn’t… or… putting it another way, there’s no good reason or good evidence to back up the supposed correspondence.
A lot of people want ‘psychometric’ tests such as MB to yield helpful information about people (such as misguided fools running HR departments who want some help recruiting the ‘right’ people). And lots of other people are quite prepared to cater to this market and to charge money either devising tests or administering them (“hey, if there’s money being offered for any old rope that sounds right, we may as well grab some of it”). And so the money sloshes around the system, with both buyers and suppliers untroubled by the fact that the ‘science’ involved is exactly on a par with astrology and forms just as good a basis (i.e. none whatsoever) for understanding personality types or making recruitment decisions (which, as some have already pointed out, MB wasn’t designed to be used for anyway).
One friend of mine used to run one of the largest organisations supplying and administering psychometric tests here in the UK. I got to know him some years after he had pulled out of that particular market. He cheerfully admitted that he had no idea whether psychometric testing was an accurate guide to anything at all.
Another friend of mine currently works for one of the largest and most reputable companies that deals in psychometric testing related to the workplace, and even lectures on how to use and administer such tests. I once invited him, in a perfectly friendly way, to send me any evidence he could find, in the academic literature, to verify that the use of such testing produced results any better than chance (e.g. better staff retention levels measured over a significant period of time, or better performance, or productivity, or job ‘fit’ and satisfaction, measured by some pertinent criterion). He was intrigued by the challenge and said he would indeed look for such evidence. He even had the funding and resources to allocate some of the searching to a junior colleague. One year later, I am still waiting. None found so far.
Interested readers may like to look at a book called ‘How to succeed at psychometric tests’ by David Cohen. Cohen is a qualified psychologist, and this book points out that, in essence, a score on a psychometric test only tells you how good you are at passing that psychometric test. The fact that employers often regard such a score as ‘meaningful’ in terms of recruitment or promotion is just absurd, but such is life.
Some of the readers of these boards know that I have some expertise in the field of what is known as ‘cold reading’. I have yet to see any so-called ‘results’ derived from MB or any other psychometric testing that can, in terms of accuracy, relevance or utility, be successfully differentiated by any objective test from complete nonsense I make up according to basic cold reading principles.
Just for the record, there’s more to cold reading than the Forer effect, but the Forer effect is certainly one part of the mix.
The accurate answer is that psychometric testing (MB or any other form) is not a measure of anything and certainly not a science… it’s a market. Some people want ‘personality analysis magic’ and others are prepared to put on white coats and pretend they can provide it… for a price.
There’s no “passing” an MBTI test; no type is “better” than any other. :dubious:
If employers aren’t using these tests to make decisions about what to do with their employees, then why are they making their employees take them?
What’s a “spuedo”? A speedo in another language?
Anyway…
I would like to recommend a recent book on this subject, The Cult of Personality, by Annie Murphy Paul, Simon & Schuster, 2004. It is subtitled “How personality tests are leading us to miseducate our children, mismanage our companies, and misunderstand ourselves.”
She covers the range from mainstream instruments like the MMPI to pop concepts like the MBTI, with history and application info for all. It’s a good read.
Well, it was a teambuilding exercise when I took them.
I’d say the MBTI is pseudoscience. There’s no reliable testing to prove its classifications.
Has anyone ever challenged its use in court?
It’s Spanish rap for “Yes, I can.”
A question to those more familiar with actually carrying out the testing, is the test sensitive to dishonesty? I.e. if you were trying to appear as a different type than you actually are, and thus answering questions with deliberate lies, would such a thing be detected? For instance, by evaluating established correlations between answers to certain questions, or answering redundant questions (questions that essentially ask the same thing – I presume those exist?) in a contrary way, because the subject might not catch the redundancy, i.e. not realise he is actually contradicting himself in his answers.
Because it seems to me that when the questions are answered honestly, the test is more or less trivial, i.e. it doesn’t actually reveal any information. (Otherwise you wouldn’t, when presented with the results, recognise your own personality traits; such recognition implies that you knew of them beforehand, so the test didn’t tell you anything new, perhaps combined with a little Forer-effect embellishment.) If, however, it could be used to detect dishonesty, it could reveal information about a non-cooperative subject, if only by way of the negative, which would be a legitimate use.
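Just to make the idea concrete: the consistency-check approach described above (paired items that ask the same thing, sometimes in opposite directions, and flagging subjects whose answers to a pair contradict each other) can be sketched in a few lines of Python. This is a hypothetical illustration, not the actual MBTI scoring method; the item names, the pairings, and the 1–5 Likert scale are all made up for the example.

```python
# Hypothetical redundant-item consistency check.
# Each pair lists two items that ask the same thing; if `reversed_` is
# True, the second item is worded in the opposite direction, so agreeing
# with one implies disagreeing with the other.
REDUNDANT_PAIRS = [
    ("q3", "q17", True),   # e.g. "I enjoy parties" vs "I avoid large gatherings"
    ("q5", "q21", False),  # two rewordings of the same preference
]

def inconsistency_score(answers, pairs=REDUNDANT_PAIRS, scale_max=5):
    """Mean absolute mismatch across redundant pairs on a 1..scale_max
    Likert scale. 0.0 means perfectly consistent answers; larger values
    suggest the subject contradicted himself on supposedly identical items."""
    mismatches = []
    for item_a, item_b, reversed_ in pairs:
        x, y = answers[item_a], answers[item_b]
        if reversed_:
            y = scale_max + 1 - y  # flip the reverse-worded item onto the same direction
        mismatches.append(abs(x - y))
    return sum(mismatches) / len(mismatches)

honest = {"q3": 4, "q17": 2, "q5": 4, "q21": 4}  # mirrored answers
faker  = {"q3": 5, "q17": 5, "q5": 5, "q21": 1}  # misses the redundancy
print(inconsistency_score(honest))  # 0.0
print(inconsistency_score(faker))   # 4.0
```

A real instrument with a “lie scale” does something in this spirit, but with norms derived from large samples rather than a bare mismatch average, so treat this purely as a sketch of the logic.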
Those of you waving the pseudoscience flag need to take it to GD.
You consider physics a “hard” science, but the fact of the matter is that psychology is the more difficult of the two sciences in many respects.
Just because a science is difficult does not mean that you can’t apply statistics to it and do what you can. Just because the descriptions are fuzzy does not make it less of a science - it’s the nature of the human brain.
If you find fault with the descriptions in the MBTI, you find fault with much of psychology. I’d gladly have it out with you, but this is not the correct forum for brazen statements about how psychometrics is patently false a priori.
Psychology is a perfectly valid science, but physics, chemistry and biology have hundreds of years on psychology. It wasn’t three hundred years ago that people believed in phlogiston. I don’t take issue with performing these tests, but with businesses using them as if anyone really knows what they mean. I take issue with anyone who develops a firm conclusion based on these types of tests. These tests are strictly academic at this point.
I’ve taken it many times in many circumstances. I often score in several different categories depending upon my mood, or on how I interpret “borderline” questions at the particular time.
That the test allows me to score so inconsistently, and does not have an IUMW (Inconsistent Uncategorizable Moody Wanker) category as a failover to place me in, means it demonstrates its own invalidity.
Incogito ergo putz, or something like that.
'Sup.
I’ve taken the Myers-Briggs twice, some 10-12 years apart, and been a rock-solid INTJ both times.
I think you’ll find that, though it’s the rarest type in the general population, boards such as this tend to have a very high concentration of us INTJs.
Hmm methinks I smell a poll in the works.
Test advocates claim they include questions designed to trap those who deliberately attempt to fool the quiz or skew the results. That doesn’t mean those traps can’t themselves be detected or thwarted.
The ‘hard’ in ‘hard science’ doesn’t mean ‘not easy’, it means ‘not soft’. Saying psychology isn’t a hard science isn’t a claim that anyone can do it; it’s that the results from psychology experiments tend to be rather wobbly compared to those conducted in physics or chemistry labs: much less conclusive, much more subject to individual interpretation, and with far more variables left unaccounted for.
That was the joke.