In normal human interactions, people pick up on patterns. Let’s say that I go to the same coffee place every day, and every day I buy a bran muffin and a medium coffee, milk, no sugar, from the same employee. Within a few days, the employee will probably see me and ask, “the usual?” And I’ll say yes. Within a few days after that, she’ll probably stop asking.
My computer is not so smart. At least, not yet.
For example, every time I open a file, regardless of the program, I have to tell the computer to list the files in reverse chronological order. Every. Single. Time. Couldn’t a future version of Windows recognize that little pattern and start doing things that way automatically? Without my having to tell it?
Right now, lots of programs sort of let you do this the hard way by setting up an incredibly complex set of “preferences” or “options” or “autotext commands” or somesuch, which first you have to find, and then decipher. In word processing (my most common task as a writer), I’d rather be able to type along and have Word remember what I do once I’ve done it a few times.
What sort of advances in programming would it take to get my ‘puter to do this? Or, if it’s already technically feasible, is it simply a matter of cost and market preference?
Computers are very, very bad at pattern recognition. Pattern recognition requires a certain amount of smartness that is not easily expressible in algorithmic form. We humans, if anything, are a little too good at it. That’s why we see animals in clouds, suffer from hindsight bias, and see patterns in randomness that probably aren’t really there.
Recognizing patterns in human behavior is an even more difficult problem. Consider that to do what you suggest, the computer would have to pay attention to every single action you take (probably on the order of thousands per day) and mine that data over a period of time to find the common threads. Then, when it automatically sets reverse-chronological order in the wrong place, you get annoyed at the computer’s obnoxious, inconsistent behavior.
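Just to make the bookkeeping concrete, here’s a toy sketch of the simplest version of that idea: log every (context, action) pair and only promote an action to a default once it clearly dominates the history. All the names and the log format here are invented for illustration; a real OS would have vastly more contexts and far noisier data.

```python
from collections import Counter

# Hypothetical log of (context, action) pairs the OS would have to record.
action_log = [
    ("open_file_dialog", "sort_reverse_chronological"),
    ("open_file_dialog", "sort_reverse_chronological"),
    ("open_file_dialog", "sort_by_name"),
    ("open_file_dialog", "sort_reverse_chronological"),
]

def suggest_default(log, context, threshold=0.75):
    """Return an action as the new default only if it dominates the history."""
    actions = Counter(a for c, a in log if c == context)
    if not actions:
        return None
    action, count = actions.most_common(1)[0]
    return action if count / sum(actions.values()) >= threshold else None

print(suggest_default(action_log, "open_file_dialog"))
# 3 of the 4 logged sorts are reverse-chronological, so it clears the bar
```

Even this trivial version shows the tuning problem: set `threshold` too low and the computer "helpfully" changes behavior after a couple of coincidences; set it too high and it never learns anything.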
It’s easier right now to let the user tweak preferences until they arrive at a permutation that suits them. AI is making inroads into personal computer use, however. For example, the more advanced spam filters now look for patterns in what the human user marks as spam, and do a pretty good job of recognizing spam after a little “training.”
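The spam-filter trick is usually some variant of naive Bayes: count how often each word shows up in mail you’ve marked spam versus mail you haven’t, then score new mail by the ratio. A stripped-down sketch of that idea (the class name and tiny training set are made up for illustration; real filters handle far more than word counts):

```python
import math
from collections import Counter

class TinySpamFilter:
    """Toy naive-Bayes filter: learns word frequencies from marked mail."""

    def __init__(self):
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, text, label):
        words = text.lower().split()
        self.counts[label].update(words)
        self.totals[label] += len(words)

    def score(self, text):
        # Sum of log-probability ratios, with add-one smoothing so a
        # never-seen word doesn't blow up the math.
        s = 0.0
        for w in text.lower().split():
            p_spam = (self.counts["spam"][w] + 1) / (self.totals["spam"] + 2)
            p_ham = (self.counts["ham"][w] + 1) / (self.totals["ham"] + 2)
            s += math.log(p_spam / p_ham)
        return s  # positive means "looks like spam"

f = TinySpamFilter()
f.train("free money win big prize", "spam")
f.train("meeting notes attached for review", "ham")
print(f.score("win free prize") > 0)  # True
```

The “training” the post mentions is just feeding more marked messages through `train` — the filter gets better because the word counts get better, not because it understands anything about the mail.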
Research into neural network simulators is producing more concrete ways to implement abstract pattern recognition and prediction, but little of it is at a stage where it could be made useful on a personal computer.
As friedo said, humans do pattern recognition much better than computers do. There has been a lot of research into “neural networks” where individual units that function similarly to the neurons in our brain are interconnected. The problem with these is that with relatively few interconnections, they are easy to figure out and analyze. Unfortunately, they don’t do anything interesting at this level either. Once you add more and more of these artificial “neurons” together, they start to exhibit some really interesting behavior (like measurable signals that look an awful lot like brain waves), but at this point the interconnections are so complex that we can’t figure out how they all work.
Still, even though we don’t understand them, we can create huge circuits and train them to do all sorts of interesting things. Again, this has problems. The first problem is that training the circuits takes an awfully long time. The second problem is that we don’t understand how they work, and sometimes they don’t work the way we planned. For example, the military folks hooked up a neural network and trained it to recognize pictures of tanks, by simply showing it a lot of pictures that had tanks and a lot of pictures that didn’t. They thought, hey, cool, we’ve got our spiffy new tank analyzer, but it turned out that all the pictures that they had of tanks were slightly darker, and that’s what the circuit ended up training itself on. So, after all this work, their spiffy tank detector only detected the overall brightness of the picture.
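To give a flavor of the “show it examples and let it train itself” process, here’s the smallest possible version: a single artificial neuron (a perceptron) that learns the logical AND function purely from labeled examples. It’s the same principle as the tank detector: the training rule finds *some* boundary that separates the examples, with no guarantee it’s the one you intended.

```python
# A single artificial "neuron" (perceptron) trained on the AND function.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias (firing threshold)
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Nudge the weights toward the right answer.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(and_data)
for (x1, x2), target in and_data:
    print((x1, x2), "->", 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0)
```

One neuron can only learn patterns a straight line can separate; the interesting (and opaque) behavior shows up once you wire many of these together into layers, which is exactly where the “we can’t figure out how they all work” problem kicks in.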
It may be possible for a computer program to mimic learning to some degree in the near future, but true learning done by a machine is still a long way off.
I dunno what sort of programming NJStar (allows you to read and write Chinese/Japanese/Korean) has, but it does have a rather limited pattern recognition if I turn that option on. Now, all I have to do is type the first two “letters” of a phrase I use often and it generally pops up as either the first choice or one of the top three.
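I don’t know how NJStar does it either, but that kind of prediction can be as simple as ranking your stored phrases by how often you type them and matching on the first few characters. A guess at the mechanism, not NJStar’s actual code:

```python
from collections import Counter

class PhrasePredictor:
    """Toy phrase predictor: rank phrases by use, complete from a prefix."""

    def __init__(self):
        self.freq = Counter()

    def record(self, phrase):
        self.freq[phrase] += 1

    def complete(self, prefix, top=3):
        matches = [p for p in self.freq if p.startswith(prefix)]
        return sorted(matches, key=lambda p: -self.freq[p])[:top]

p = PhrasePredictor()
for _ in range(5):
    p.record("thank you very much")
p.record("thanks again")
print(p.complete("th"))  # most-used phrase ranks first
```

That’s why the phrase you use often “generally pops up as the first choice” — frequency ranking, not any deep understanding of what you’re writing.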
As for doing repetitive actions, aren’t those what Macros are for?
And IIRC you can tell the computer to sort in descending order when it’s displaying the files, and keep it that way unless you want to do something different.