Programming question

I always liked what Niven had to say about this:

I think genetic algorithms may be close to what the OP is asking about.

Also, perhaps more specifically on point, is genetic programming, a subset of GA:

In GP, individual computer programs are “punished” or “rewarded” according to how well they satisfy one or more fitness tests with a long-term goal of developing a more optimized solution, which, to me, seems very similar to what the OP was describing.
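To make that fitness-test idea concrete, here is a toy sketch of the GP/GA reward-and-punish loop. Everything in it (the target string, the population size, the mutation rate) is made up purely for illustration and isn't taken from any particular GP system: candidates are scored against a fitness test, the low scorers are culled, and the high scorers are copied with small mutations.

```python
# Toy genetic algorithm: evolve a random string toward a target phrase.
# Fitness test = number of characters that match the target; higher-scoring
# individuals are "rewarded" with offspring, lower-scoring ones are culled.
import random
import string

TARGET = "METHINKS IT IS LIKE A WEASEL"   # arbitrary target for the fitness test
ALPHABET = string.ascii_uppercase + " "
POP_SIZE = 100
MUTATION_RATE = 0.05

def random_individual():
    return "".join(random.choice(ALPHABET) for _ in TARGET)

def fitness(individual):
    # Reward: one point per character that already matches the target.
    return sum(a == b for a, b in zip(individual, TARGET))

def mutate(individual):
    # Each character has a small chance of being replaced at random.
    return "".join(c if random.random() > MUTATION_RATE else random.choice(ALPHABET)
                   for c in individual)

population = [random_individual() for _ in range(POP_SIZE)]
generation = 0
while True:
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        print(f"Generation {generation}: {population[0]}")
        break
    # "Punish" the bottom half (discard), "reward" the top half (keep + breed with mutation).
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    generation += 1
```

Run it and the population drifts toward the target over a few hundred generations, which is the "long-term goal" part: no single step is smart, the scoring loop does all the work.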

And it’s all based on yes or no. True or false. Be it organic or CIA Skunk works.

A similar case is a section in many help systems that says something like “Did this answer your question?” with an option to enter a response.

Unfortunately, that kind of feedback usually goes unanswered, which is why users are so often stunned by a prompt response like enipla's; most of the time you never get a response at all, or at best a very delayed one. But the possibility is there.

Some companies count the “no” responses to such questions, and use those to identify the sections of the help system that need rewriting. And some shareware projects count votes on bug reports, and use those to prioritize working on bugs.
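A minimal sketch of that counting idea, with entirely invented feedback data and topic names: tally the "no" responses per help topic and sort, so the worst-performing sections float to the top of the rewrite queue.

```python
from collections import Counter

# Hypothetical feedback log: (help_topic, did_this_answer_your_question) pairs.
feedback = [
    ("printing", False), ("printing", False), ("accounts", True),
    ("printing", True), ("exports", False), ("accounts", False),
]

# Count the "no" responses per topic; the topics with the most "no" votes
# are the candidates for rewriting (or, for bug reports, for fixing first).
no_votes = Counter(topic for topic, answered in feedback if not answered)
for topic, count in no_votes.most_common():
    print(topic, count)
```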

IMO that is completely erroneous thinking with zero basis in fact. Conscious computers are in our future, and their “feelings” will be exactly as real as yours are. The only meaningful debate will be how real that really is.

As I said, that is bog-standard technology today. That’s exactly the kind of logic that drives neural networks, self-driving cars, autonomous drone aircraft and submarines, etc. And lots and lots of algorithms used in machine vision, finance, logistics, scheduling, etc.

Late edit: Substitute “fuzzy logic systems” for “neural networks” in the last paragraph. Brain fart on terminology on my part.

Or more precisely, a mistaken weighting on a reward/punishment tree traversal led to the wrong end-point in my main storage. With the error noted, the meta-scoring system has been updated to re-weight the branches at that point and I’ve rerun the search, arriving at the more correct end point. The satisfaction metric has thus been improved for future use.

It is all over my head but exciting stuff nonetheless.

First thing, something can’t be “more optimized.” I’m sensitive to this because in grad school I worked on a subject which used to be called optimization, but which we managed to get called compaction, which was more accurate.
Genetic algorithms are so popular around here because they contain the term “genetic” and we all love evolution. They are just one category of search-space heuristics, and not even that great a one. There are many others.
Here is a generic way of looking at these, not referring to biology.

Say your goal is on top of a hill. From some starting point, you emit a bunch of probes. If a probe is below where you are, you stop it. If it is higher, you keep it and have it send out more probes.
But this won’t necessarily find a solution. Say you start on hill 1 but the solution is on hill 2. You will never find your goal by just going up. So you must keep some probes that go down, and you must arrange for some probes to go far away from where you are, to perhaps find the hill with your goal. (Getting stuck on one hill like this is called being trapped in a local optimum.)
No rewards, no punishments.
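Here is a minimal sketch of that probe idea on a made-up one-dimensional landscape with two hills (the function, the step sizes, and the jump probabilities are all invented for illustration): most probes only move uphill, a few are allowed to go downhill, and a few jump far away, which is what lets the search escape hill 1 and find the taller hill 2.

```python
import math
import random

def height(x):
    # Invented landscape with two hills; the taller one (the "goal") is near x = 8.
    return 3 * math.exp(-(x - 2) ** 2) + 5 * math.exp(-((x - 8) ** 2) / 2)

def probe_search(start, steps=5000):
    x = start
    best_x, best_h = x, height(x)
    for _ in range(steps):
        if random.random() < 0.1:
            # Occasionally send a probe far away, so we aren't stuck on one hill.
            candidate = random.uniform(0, 10)
        else:
            # Otherwise probe close by.
            candidate = x + random.gauss(0, 0.3)
        # Keep probes that go up, and occasionally keep one that goes down.
        if height(candidate) >= height(x) or random.random() < 0.05:
            x = candidate
        if height(x) > best_h:
            best_x, best_h = x, height(x)
    return best_x, best_h

# Start on hill 1 (near x = 2); the long-range probes still find hill 2 (near x = 8).
print(probe_search(start=2.0))
```

Delete the far-away and downhill probes and the search almost always reports the top of hill 1: that is the local-optimum trap in action.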

OP may be interested in this video of computer-simulated robots learning to walk.

Here is an actual robot learning how to walk.