At a training session about 6-7 years ago, a study was presented dealing with the psychology of the average personal investor.
This was in the era when people were quitting their jobs to day trade full time, and boutiques were set up where these “investors” would open an account (frequently on margin) and get a cubicle, a computer, and access to a monumental amount of information from which to analyze potential buys and sells. The boutiques would tout how much information was available to the “serious investor” and how that information was key to their success, in much the same way online brokerages are doing now (“Get NASDAQ Level II quotes! P/E! P/B! P/S! Treynor! Sharpe! Beta! Alpha! Std Dev! R squared!”).
The study I’m looking for focused on “how much information is useful, and how much is too much”. It studied people who handicap horse races for a living (professional handicappers, the ones who set the odds for individual horses in races). They were given a field of horses identified only as #1, #2, #3, etc. With absolutely no other information, they were asked to set the odds of each horse winning the race. Their results were no better than random. Then they were given one piece of information (e.g. whether or not each horse won its last race), and their accuracy improved. Then they were given a second piece of information (e.g. how many races each horse had run in its career), and with two pieces of information on each horse their accuracy measurably improved again. Then they were given three pieces of information on each horse, then four, then five, and so on (e.g. age of the horse, winning % over the last 10 races, winning % over the career of the horse, the horse’s sire and dam, trainer, jockey, “mudder” or not, record on turf vs. dirt, winning % per distance run, time since last race, how many times the same jockey had ridden the same horse, etc.).
The study revealed that, as expected, the handicappers became more and more accurate in their predictions as more information became available to them. However, the advantage petered out after about six or seven discrete pieces of information. That is: with five pieces of information they were more accurate than with four, with six more accurate than with five, and with seven marginally more accurate than with six, but beyond seven there was no further measurable increase in accuracy.
Agreed: picking stocks is hardly the same as picking horses. However, the conclusion the study drew was that, while certain key information is better than no information at all, beyond a certain point additional information offers no benefit when trying to predict the future performance of “less than mathematically precise” targets.
The purpose of this thread is not necessarily to debate the study (I doubt I’ve presented it clearly enough to warrant scholarly debate anyway), but to ask for your help in identifying it. Who did it? Where? When? Is there a citation? Where can I find or buy it? Are there other studies like it?
Thanks