Artificial Intelligence & biasism

The smoothest path to AI seems to be neural networks: you just feed them data and they “learn” how to manage the information. No tedious coding; just let the machine do all the work.

Notoriously challenging to debug, though. And it appears that these learning machines are heavily dependent on what data they are given.

So, in the end, we instill these “smart” machines with our own stupidity, an effect that tends to be somewhat less pronounced in classical hard-coded machines (though not absent).

We want the devices to be reasonably familiar and understandable, but really, what is the point of machines that are essentially nothing more than electric people? My feeling is that, if there is something of value to be gained from brainy devices, their operative functioning should differ significantly from ours, so that uncharted avenues of thought can be explored.

Artificial General Intelligences are something out of science fiction stories. There is no real evidence that they will ever be practical to make, or even that we will ever know how. This is like arguing about the flavors of food pills while there are no food pills.

This is not really the case. Neural networks seem to be increasingly adept at handling matters related to perception, which is a huge fraction of the problem. What we have to do is define “intelligence”, because classical machines can handle massive volumes of abstract information better than we can, bugs notwithstanding.

So far, the only thing machines cannot do is take initiative, but that appears to arise from a combination of external exigencies and biochemistry. It may be possible to get very close to building such devices, if our ecosystem can sustain us long enough to get there.

If you want to program a computer to play chess, or poker, or whatever, you don’t code in heuristics the way they did in the 1950s; instead, you have the neural network learn to play well on its own. That way it does not have any human conceptions, well-founded or not, baked in, and indeed humans can learn something from the results.
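
For what it’s worth, here’s roughly what that looks like in miniature — not chess and not a real neural network, just a little self-play learner for the game of Nim, with nothing about strategy baked in. Treat it as a sketch; every name and number in it is made up for illustration:

```python
# A toy sketch of learning a game purely from self-play, with no strategy
# coded in. It's not chess and not a neural network -- just a value table
# for the game of Nim (21 sticks, take 1-3, taking the last stick wins).
# All hyperparameters are invented for the example.
import random
from collections import defaultdict

ALPHA, EPSILON, EPISODES = 0.1, 0.2, 50_000
V = defaultdict(float)   # V[(sticks_remaining, action)] -> estimated value

def legal_actions(sticks):
    return [a for a in (1, 2, 3) if a <= sticks]

def choose(sticks, explore=True):
    acts = legal_actions(sticks)
    if explore and random.random() < EPSILON:
        return random.choice(acts)          # occasionally try something new
    return max(acts, key=lambda a: V[(sticks, a)])

for _ in range(EPISODES):
    sticks, history, player = 21, [], 0
    while sticks > 0:
        action = choose(sticks)
        history.append((player, sticks, action))
        sticks -= action
        player = 1 - player
    winner = 1 - player                     # whoever took the last stick
    for p, s, a in history:                 # nudge each move toward the final outcome
        outcome = 1.0 if p == winner else -1.0
        V[(s, a)] += ALPHA * (outcome - V[(s, a)])

# After enough episodes the greedy policy should roughly rediscover the
# classic "leave your opponent a multiple of 4" strategy, e.g. take 1 here:
for s in (5, 9, 13, 17, 21):
    print(s, choose(s, explore=False))
```

Nobody told it about multiples of four; that’s the sense in which humans can learn something from what these systems come up with.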

Yes, there are instances in which neural networks will just learn people’s prejudices and start operating on them - it’s a live issue in the same way that “people have prejudices” is.

So for instance there was a recent case in which a couple of lesbian YouTubers got their channel shut down by an algorithm that judged their content (IIRC) too adult for their target market (young adults figuring out their sexuality). They successfully fought this on the grounds that if their content had been heterosexual it would never have been considered ‘adult’: they didn’t have any real adult content, they were just discussing relationships. And they were totally right about that.

But inasmuch as the algorithm was probably proactively shutting down “channels likely to get a lot of complaints”, the algorithm was right too! Channels run by lesbians about same-sex relationships ARE more likely to get complaints than other channels. It’s just that in this case they’re the sort of complaints we want to ignore. But the algorithm isn’t going to know that unless we tell it, or explicitly train it to recognise “kinds of complaints we ignore” as well as “kinds of content likely to generate complaints”.

Like all computer systems, getting modern AI to do what we want depends on CLEARLY telling it what we want. Which is actually hard.
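
To make that concrete, here’s a toy sketch (not anything a real platform actually runs) of what “telling it which complaints don’t count” could look like. The category names and the threshold are pure inventions for the example:

```python
# Toy moderation score: only complaints the platform has decided are
# legitimate count toward a takedown. Everything here (categories, counts,
# threshold) is invented for illustration.
IGNORED_COMPLAINT_KINDS = {"objects_to_lgbt_content", "dislikes_creator"}
TAKEDOWN_THRESHOLD = 50   # arbitrary

def complaint_score(complaints, ignore=IGNORED_COMPLAINT_KINDS):
    """complaints: dict mapping complaint kind -> count for one channel."""
    return sum(count for kind, count in complaints.items() if kind not in ignore)

channel = {"objects_to_lgbt_content": 80, "explicit_content": 3}

naive = sum(channel.values())         # every complaint counts: 83, over threshold
informed = complaint_score(channel)   # only complaints we care about: 3, well under

print(naive > TAKEDOWN_THRESHOLD, informed > TAKEDOWN_THRESHOLD)  # True False
```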

This really is the case.

Not really. We exist, therefore we *know for a fact* that it’s possible to create human-level general purpose intelligence. If AI were impossible then we wouldn’t exist in the first place to argue over the matter.

You will note that I said “practical to make” and “that we will ever know how,” not “impossible.” The problem is that the human brain is a hugely efficient, dense bundle of nanotechnology vastly more advanced than the two-dimensional slabs of silicon that we use for our simple idiot-savant computers. Sure, it is in theory possible to build a human-level artificial general intelligence, but if building it takes a million high-end video cards, fills a stadium, and requires its own nuclear reactor, then it isn’t practical. And just because it is theoretically possible doesn’t mean that we will ever figure out how to make this thing that is vastly, vastly more complex than anything humanity has ever built before, whether we do it with chips of silicon or with modified bags of proteins and nucleic acids.

Things that people consider weaknesses of human brains could end up being close to optimal given the goal and task at hand. Meaning that we can picture a more perfect solution that doesn’t make the mistakes we make (or have our biases), but it’s possible that a system that avoids those mistakes has just made a trade-off that results in an overall worse solution.

For example, animals tend to be conservative and assume the worst when incomplete sensory information raises a red flag (e.g. a possible predator). A machine that is right more often because it eliminates many false positives might also miss the one true positive that really matters (resulting in the machine getting eaten by a tiger and spat back out).
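
Back-of-the-envelope, with numbers invented purely for illustration:

```python
# Back-of-the-envelope version of the tiger example. Every number here
# (base rate, error rates, costs) is invented purely for illustration.
P_TIGER = 0.01            # how often the rustle really is a tiger
COST_FALSE_ALARM = 1.0    # wasted energy fleeing from nothing
COST_MISS = 10_000.0      # getting eaten

def expected_cost(false_alarm_rate, miss_rate):
    return ((1 - P_TIGER) * false_alarm_rate * COST_FALSE_ALARM
            + P_TIGER * miss_rate * COST_MISS)

paranoid_animal = expected_cost(false_alarm_rate=0.50, miss_rate=0.00)
accurate_machine = expected_cost(false_alarm_rate=0.05, miss_rate=0.10)

print(paranoid_animal)    # ~0.5: lots of pointless sprints, but never eaten
print(accurate_machine)   # ~10: far fewer errors overall, but the rare miss dominates
print(paranoid_animal < accurate_machine)  # True: the "sloppier" animal does better
```

The machine makes fewer mistakes in total, yet comes out worse, because the one kind of mistake it still makes is the catastrophic one.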