mental operating systems / user interface

Is there such a thing as a mental operating system? This is part GQ; if the other parts are overwhelming, feel free to move it. I’ve had a couple of related thoughts in my head recently: I was thinking about how a good deal of the general thinking I do is visual - I imagine playing out different scenarios when making decisions, or I’ll mentally picture something that I just enjoy thinking about. If what I’m imagining isn’t too intensive, I can easily pay attention to it and to what I’m actually looking at in reality at the same time, just as one can pay attention to a “picture in picture” function on a TV or subtitles in a movie. That led me to think of things like the movie “Dreamcatcher,” where one character envisions his memories visually as files in a building. So I’m wondering:

  1. Would it be possible, if I imagined some sort of mental picture long enough projected upon my real visual input, for it to eventually become effortless? For example, suppose I spent a day staring at a digital clock, and then the next week or so trying to imagine the clock ticking off every second, checking every so often to make sure my imagined clock was keeping the correct time. Eventually, would the clock become second nature, such that I didn’t have to think about it unless I wanted to - still “visible” in the periphery of my imagination but not “in focus” until I was curious about the time, at which point I could just let my imaginative focus wander over to that part of my mental “screen” and read the actual time? What else could one do with one’s mental/visual/imaginative “user interface”? Not just when dreaming/meditating/visualizing internally, but more so when doing ordinary everyday things and having some sort of useful projection on top of normal reality.

  2. Besides projecting a visual “user interface”, what can one do about one’s deeper mental “operating system”? Are there allegorical mental techniques similar to computer functions? Is there a way to mentally “compress”, “file”, “error check”, etc., one’s data/memory such that we have easier, better, more comprehensive recall?

  3. Is anyone famous for doing this sort of thing? What are the limits of the human mental “user interface” and “operating system”? Have there been any studies on this topic? Are there any noteworthy cases? Might these show broad differences among cultural, racial, gender, age, or other groups?

  4. Do Dopers have any interesting anecdotes about the way their own mental operating system works, and do they have any special kind of “user interface” or useful “algorithms”/“programs”/“scripts”/etc.? Can we think of other recent developments in computer/information technology that we could translate into the mental sphere? Are there other important allegories - such as the idea that memes might be better understood as mental genes than as mental software, or that religions/philosophical viewpoints are types of mental OSs? Any other insights?

  5. What would you like to be able to do in your own user interface? How would you change your own operating system if you could?

The short answer to all of these, unless I’m missing something, is No.

Anything else is the wishful thinking of techno-utopians who like to pretend that their brains are computers, in spite of the brain being about a billion times more advanced than any computer. Why would you want to waste time and money emulating functions that your brain already does, and more, subconsciously?

I think common sense dictates the short answer must be either “yes, but only so far” or “yes, surprisingly much” or “yes, but we don’t know how much”. In terms of user interface, we know there are things like visualizing information to increase memory retention. How far we can expand on that is an open question, but certainly it exists in at least a minimal form. In the case of the “mental clock”, there’s no question that you could visualize a clock and keep it accurate for at least a minute or so. And there have been other threads suggesting that people can train themselves subconsciously to be aware of the time to some degree of accuracy. So it’s not a stretch to wonder if this could be made more accurate, and conscious.

Well, one of my questions was which computer analogies are useful and which are not, as well as whether other analogies are more useful, such as the memetic/genetic one - so it’s not a matter of pretending that the brain is a direct analogue of modern computers. Is the brain really “more advanced”? Advancement assumes design. It’s clearly more powerful (at this time, in the general population) at certain tasks such as visual recognition and language, and less powerful at certain things such as pure calculation and accurate data retrieval.

And, obviously, there’s less control, accuracy, and usefulness in subconscious processes. Why is there a huge market for PDAs? Why is there research into putting a user interface into eyeglasses? Why do people meditate to lower their reaction to stress and moderate other body functions? A lot of skills people used to have, like memorizing and retelling oral tradition, have waned due to modern recording devices. Clearly there’s potential for the brain to do some interesting things. But in the past, people didn’t have some of the concepts created by modern tech to try to emulate mentally. They also didn’t have to cope with the same volume of information. Surely the confluence of new mental viewpoints from new tech and cultural changes, new research into both mental techniques and the biology of the brain, and the crossbreeding of different areas of inquiry might bring us some insights in the area of mental user interfaces and operating systems?

I think all that’s happening here is that you’re using an analogy, then treating it as literal. There are lots of interesting things about the mind and brain, but why would they have to resemble an electronic computer?

That’s just you (and a bunch of others). But it’s not the only way of thinking.

There are other groups of people whose primary ‘thinking’ method is auditory – they imagine hearing a dialog inside their head. And there are some groups for whom it is kinesthetic – ‘going through the motions’ to think about it. People in athletics, dance, and acting often seem to ‘think’ this way. (Or they use combinations of these. Many people tend to pace when they are thinking hard.)

It seems like any of the senses can be the primary one for ‘thinking’. I once had a friend, an accomplished chef, state that he thinks about new dishes by imagining how they will smell!

The sizes of these groups vary. I believe that the ‘visual’ people are the largest group, followed by the ‘auditory’ group. I once read that you could tell which mode a person customarily thinks in by how they indicate agreement: a visual thinker will say “I see what you mean”; an auditory thinker will say “I hear what you’re saying”. I don’t know how accurate that is.

The brain is, for all intents and purposes, no more than a very complex self-regulating computer. However, it is not a digital logic computer in either construction or algorithmic operation, nor does it function in any way resembling, say, a PDA or a desktop computer.

There are, despite our almost complete ignorance of how consciousness works, a vast array of things we can describe about neural function, from the basic neurocellular level of how neurons function and network, to how processing and interpretation of environmental stimuli occur, to how higher-level decision making transpires. There is a considerable body of research into any of a number of specific areas within these general categories; however, relating mental function in one area to another (say, how one correlates the sight of fingers snapping to the sound received by the ears) is very difficult, and making connections between the different general categories (say, explaining speech in terms of specific, cell-level neural activity) is vastly beyond our capabilities today and, indeed, in the foreseeable future.

What little we know about consciousness–which I’m going to take as the nearest equivalent of the “mental operating system” you’re talking about–suggests that it occurs in different stages on many different levels. At each level, information is filtered, processed, and interpreted such that by the time we get to what you’d like to think is your self-awareness, you don’t have to worry about the billions of individual signal impulses coming in from your retina and instead only have to decide whether the girl sitting across the table from you is fascinated by your witty banter or bored into insensibility. The actual decision-making at the neural level doesn’t work like a digital computer, where individual bits are processed via binary AND/OR/XOR/NOR logic; rather, synapses (connections between neurons) are tripped via charges which accumulate via either sufficient frequency or intensity, after which the neurotransmitter which allowed transmission of a potential is recycled. Memories, behaviors, et cetera are “stored” not in neurons like bits on a hard drive but rather in the accumulation of neurotransmitters (in the case of short-term memory) and in the formation of additional connections in the “dendritic trees” of the synapse, which allows for an easier connection. This all occurs in massive neural networks which are constantly being modified and developed (or, in the case of idleness, degrading). There is, in essence, no division between hardware and software; it’s all hardwired firmware, though you could make general analogies to BIOS, cache, virtual memory, and so forth if it makes you happy, as long as you don’t expect the analogy to hold in too much detail.
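To make the charge-accumulation point concrete: computational neuroscience often models this with a “leaky integrate-and-fire” neuron, which fires only when accumulated input crosses a threshold rather than computing Boolean logic on individual bits. Here’s a minimal sketch (the threshold and leak values are made up purely for illustration):

```python
# Leaky integrate-and-fire toy model: incoming charge accumulates on
# the membrane, leaks away over time, and the neuron "spikes" only
# when the accumulated potential crosses a threshold -- no AND/OR/XOR
# gates involved.
def simulate(inputs, threshold=1.0, leak=0.9):
    potential = 0.0
    spikes = []
    for t, charge in enumerate(inputs):
        potential = potential * leak + charge  # leak, then accumulate
        if potential >= threshold:
            spikes.append(t)   # neuron fires at time step t
            potential = 0.0    # reset after firing
    return spikes

# Frequent weak inputs eventually trip the threshold...
print(simulate([0.3] * 10))                                  # → [3, 7]
# ...while the same inputs spaced out leak away before they can sum up.
print(simulate([0.3, 0, 0, 0, 0.3, 0, 0, 0, 0.3, 0]))        # → []
```

This captures the “sufficient frequency or intensity” idea above: the same total input can either fire the neuron or not, depending entirely on its timing.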

As a result, people (and complex animals in general) are very good at dealing with complex, multivalued, and incomplete problems for which analytical solutions are difficult or impossible - the sort of problems that digital logic computers are very poor at. On the other hand, people are (typically) very slow and often inaccurate at dealing with analytical problems that require holding specific numbers or images in memory simultaneously, whereas digital computers do this with ease, provided you don’t exceed their capacity. There are a handful of people–autistic savants and the like–who can perform impressive feats of calculation by memory; they’re only impressive relative to normal people, however; for the most part (save for image processing), even a primitive computer can perform the calculations faster.

I can go into more detail later (neurophysiology is an enthusiasm of mine), but for real detail on this I recommend Ian Glynn’s An Anatomy of Thought, which is the most detailed treatment of the biological operation of neural processes for the not-too-science-phobic layman. There are several others I’d recommend, but I don’t have them immediately at hand.

Stranger

That’s true! But it’s still applicable - in ways similar to the user interfaces of computers for blind people. For an auditory person, could they set up a mental program where, say, every second there was a barely audible “tick” which they wouldn’t notice unless they focused on it? Maybe they could even issue “mental voice” commands - think “what time is it” and then hear a “voice” tell them the proper time. Maybe they could train their brain to have a “start-up sound” when they wake, with a voice notification of the current date and the agenda for the day…?

So now you want the brain to emulate an analog clock? Why waste time and effort when your brain/body already keep track of time? You don’t need to “ask” your brain what time it is when you already know.