Programming question

Can computers be programmed to recognize reactions to their success or failure in achieving a goal? Is there an equivalent to "bad" or "good" that can be programmed into a computer? (A favorable reaction to something vs. an unfavorable reaction.)

What do you mean?
Can a computer be “happy?”
They can certainly be programmed to act like they are happy, but that’s not the same thing.

It sounds to me like you’re asking about the field of computer science called machine learning.

Right?

Possibly. I am trying to find out if there are any kind of near equivalents we can program into computers that might correspond to a human emotion.

An example might be: we give a computer a series of goals with preferred outcomes or assigned priorities. When one is accomplished, a series of other opportunities to pursue outcomes presents itself. I hope that makes sense.

That is not emotion; it’s just numerical analysis. At best electronic computers could simulate emotions, which generally would be of no use to anyone. An emotional computer would be doing the opposite of what you are considering: an emotional computer would stop operating efficiently based on random factors. Computers have no wants or desires; they don’t feel good or bad; they do what they’re programmed to do. People are already emotional, so there’s no need to have machines act that way also.

This is where it would appear. You can simulate “good” and “bad” responses to behavior (computer deleted my resume - bad, computer showed me a cute cat video - good), or grade the computer’s behavior on a spectrum (e.g. 0-100, letter grades, whatever you want). Machine learning shows some promise in the world of machine translation (between human languages), but it isn’t that good for most other things. If you can program a neural network to practice medicine (Bad computer! Amputating right foot not correct treatment for sinus infection! Bad point!), there’s a doctorate for you.
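To make that grading idea concrete, here is a minimal Python sketch. The behavior names and scores below are invented for illustration, not from any real system:

```python
# Hypothetical sketch: grade each observed behavior on a 0-100 scale.
# Behavior names and their scores are made up for illustration.
feedback = {
    "deleted_resume": 0,     # bad computer!
    "showed_cat_video": 90,  # good computer
}

def grade(behavior):
    # Behaviors we have no opinion about get a neutral 50.
    return feedback.get(behavior, 50)

print(grade("showed_cat_video"))  # 90
```

The spectrum could just as easily be letter grades or anything else; the point is only that behavior gets mapped to a comparable score.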

It is tempting to want to try this out, isn’t it? You could have a big red “punish” button that you could hit every time Firefox stops responding.

It sounds like you’re talking about error trapping. It’s done all the time.

In VB.NET:

Try
----Do some stuff, complete with all kinds of redirection, but return here. Do other stuff as needed.
Catch
----The Try didn’t work the way I wanted. ‘Catch’ the exception and read the error. Maybe do other things based on the error. Or just write log files.
Finally
----This runs whether the Try succeeded or failed. Do some clean up work here. Close objects, disconnect from databases, call the next routine, whatever.
End Try

Of course you can nest all of this in extra Try-End Try blocks.

Is that what you’re talking about?

Yes, this is what I was referring to. Stephen Hawking’s recent warning about AI is what I am thinking about here.

Heh. ‘Contact Us’

It’s sort of funny. I run a county GIS web site, and yes, we have a ‘Contact Us’ button that sends whatever they ask, or rant about, right to my desk. I can often respond within minutes and either direct them to the correct department or answer their question. Users are often stunned to get any response at all, let alone a prompt one. It’s kinda fun.

It’s good info for me too. If enough users are confused about something, I can fix it. Well, sometimes… There are some that, well… I wish I could throw a shoe through the screen. Only a matter of time I guess :smiley:

Nine times out of ten the problem is a misplaced semicolon.

Okay, now to read the OP …

That is just exception handling - a common idiom in most modern programming languages. What the OP describes sounds to me much more like a goal directed search strategy which is common in machine learning and, when combined with heuristics, artificial intelligence.

This is extremely common, and has nothing to do with AI. In branch and bound search strategies, you often have to decide between two alternative paths. If you try one, and fail, you come back and try another.
You can change the criterion by which you make a selection based on your experience. Say you are solving a maze and randomly pick which path to try. If you discover after a while that 80% of the time you choose the right-hand path you have to go back, you might bias your choice toward the left one.
This kind of thing is used in AI but isn’t really AI.
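A minimal Python sketch of that biased-choice idea, with the maze itself abstracted away so only the left/right statistics are modeled (all names here are invented for illustration):

```python
# Sketch of the biased-choice idea: record how often each branch of the
# maze forces a backtrack, then prefer the branch that fails less often.
counts = {"left":  {"tried": 0, "failed": 0},
          "right": {"tried": 0, "failed": 0}}

def failure_rate(branch):
    c = counts[branch]
    return c["failed"] / c["tried"] if c["tried"] else 0.5  # 0.5 = no data yet

def choose():
    # Bias toward the branch with the lower observed failure rate.
    return "left" if failure_rate("left") <= failure_rate("right") else "right"

def record(branch, failed):
    counts[branch]["tried"] += 1
    counts[branch]["failed"] += int(failed)

# After the right branch fails 8 times out of 10, the chooser prefers left.
for i in range(10):
    record("right", failed=(i < 8))
record("left", failed=False)
print(choose())  # left
```

Nothing here "feels" anything about failing; the selection criterion simply shifts with experience.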

Tiny quibble - numerical analysis is a specific field in computer science dealing with methods of doing numerical computing. I had to take it in grad school because it was on my quals and something not taught by my undergrad school, and I still feel the pain today.

Sure. But it’s all yes or no in the end.

Let’s say you have Google Maps zoomed in to central Illinois and search on Springfield. Due to the current coordinates you are zoomed into, you will receive Springfield, IL as an option before, say, Springfield, Texas. All simple yes/no, true/false routines. Goal-directed search strategy? Sure. I think that’s a good example.
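That ranking can be sketched in a few lines of Python. This is not how Google Maps actually works; the coordinates are rough approximations and the candidate list is made up for illustration:

```python
import math

# Hypothetical sketch: rank search hits by distance from the map's
# current center, so the nearest "Springfield" is suggested first.
# Coordinates are approximate (latitude, longitude) pairs.
springfields = {
    "Springfield, IL": (39.80, -89.65),
    "Springfield, TX": (33.18, -96.57),
    "Springfield, MA": (42.10, -72.59),
}

def rank(center, candidates):
    def dist(latlon):
        # Straight-line distance is good enough for a toy ranking.
        return math.hypot(latlon[0] - center[0], latlon[1] - center[1])
    return sorted(candidates, key=lambda name: dist(candidates[name]))

central_illinois = (40.0, -89.0)
print(rank(central_illinois, springfields)[0])  # Springfield, IL
```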

Sure, you can have a ‘machine’ write code for itself based on previous results. It all comes down to simple yes or no results.

A ‘maybe’ is a result of a true or false.

It’s unclear at what level the OP is talking.

We’ve had threads and threads about whether, in principle, strong AI is possible. That is, could a computer become “conscious”, whatever that even means in humans.

Pretty much anyone with a formal education in computer science says “Yes, of course. The only question is when. It probably won’t be in the next decade. Neither will it take until the year 3000.” Meantime, folks who know nothing of the science say “No, never. Consciousness is fundamentally a human trait and no mere machine can ever or will ever have it.”
There are all sorts of machine learning processes in use today which rely on the idea of “reward” and “punishment” for good or bad results according to the goal metric. I don’t think anyone today would argue that the machine (or more precisely the learning software) “feels” pleasure or pain when it receives a score for its latest attempt. But it 100% certainly does alter its behavior in response to the feedback.
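A toy Python sketch of that reward/punishment loop. The actions, payoffs, learning rate, and exploration schedule are all invented for illustration; real systems are far more elaborate, but the principle is the same:

```python
# Toy sketch of reward-driven learning: the "learner" has no feelings
# about its score, but its behavior still shifts toward whichever
# action earns more reward.
values = {"A": 0.0, "B": 0.0}  # estimated value of each action
alpha = 0.5                    # learning rate

def reward(action):
    # Hypothetical environment: action "B" pays better than "A".
    return 1.0 if action == "B" else 0.2

def step(t):
    if t % 10 == 0:
        # Periodically force exploration so both actions get sampled.
        action = ["A", "B"][(t // 10) % 2]
    else:
        # Otherwise exploit: pick the action with the highest estimate.
        action = max(values, key=values.get)
    values[action] += alpha * (reward(action) - values[action])

for t in range(200):
    step(t)
print(max(values, key=values.get))  # "B" ends up preferred
```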

Which is essentially Skinnerian conditioning. Which, when applied to animals, is considered absolute proof of learning, tending towards proof of (something resembling) emotion, and certainly not proof of consciousness in the subject animal.

So back to the OP: tell us more about what you’re trying to ask.

Self-aware is how I define it. And that has its own pitfalls and arguments. Computers are programmed today to ‘save’ themselves.

<snip>

I will try to explain what I am asking. I realize computers will never have a true feeling of any kind. I do believe that humans respond to electrochemical reactions produced by various exposures or behaviors. Computers would likely never be able to experience the chemical part of this reaction.

I am trying to find out if a type of logic could be developed that could cause a computer to respond in a way that almost simulates a reaction based on a reward or punishment. For instance, suppose a goal was programmed into a computer, with a long series of complicated criteria that would need to be met to achieve it. The computer was also programmed to know its location relative to its goals. If a behavior moved it further away it would reject the behavior; if it moved it closer it would accept the behavior. Something along these lines.
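That accept/reject rule is essentially what's called hill climbing, and a toy Python sketch might look like this. The numeric state and goal here are stand-ins for whatever the real criteria would be:

```python
# Sketch of the accept/reject rule described above: the program tracks
# its "location" relative to a goal and keeps only moves that reduce
# the distance to it.
goal = 100
state = 0

def distance(s):
    return abs(goal - s)

def consider(move):
    global state
    candidate = state + move
    if distance(candidate) < distance(state):
        state = candidate  # accept: the move brought us closer
        return True
    return False           # reject: the move took us further away

for move in [30, -10, 50, 20]:
    consider(move)
print(state)  # 100 (the -10 move was rejected)
```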

Yes of course. Not what I meant, could have been clearer.

A commonly held view is that a system is indistinguishable from its complete mathematical representation. In other words, if you imagine a supercomputer simulating our universe completely, the creatures in that simulation and their emotions would be just as real as those of the “real” creatures.

Accepting this, there’s no qualitative difference between a living creature and a computer program. I don’t know how far AI is from emulating an organic brain. Is the state of the art near the brain of a small insect, advancing toward that of a beetle?

Of course. This is explicit in many systems involving computer learning. Backprop is just one of many examples, where rewards are explicitly fed back. A backprop-programmed backgammon program was rewarded when its play imitated human experts, and eventually ended up beating those experts.
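This is not Tesauro's actual backgammon program, but a toy Python sketch of the feedback idea: a single weight is repeatedly nudged by an error signal until its output approaches a target.

```python
# Toy sketch of explicit feedback: a single weight adjusted by
# gradient descent so that its output approaches a target value.
w = 0.0        # the "behavior" being adjusted
x = 1.0        # fixed input
target = 1.0   # desired output
lr = 0.3       # learning rate

for _ in range(50):
    y = w * x
    error = target - y   # the feedback signal
    w += lr * error * x  # alter behavior in response to feedback

print(round(w, 3))  # converges to 1.0
```

Real backprop applies this same feedback step across many weights and layers at once; the principle of feeding a score back to alter behavior is identical.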

The trouble is that those aren’t rewards based on emotions. It’s just numbers again, a computer has no desire, it doesn’t operate more efficiently based on reward or punishment in the unconscious manner of humans. Perhaps there might be biological computers that might do something like that.

All of this has little to nothing to do with what Hawking is talking about. He’s talking about a point in the development of AI where we no longer understand and effectively control what a machine is doing, and as a result the machine may do things we consider harmful. There doesn’t have to be anything like a concept of human emotions for that to occur.