The role of electronic brains

Self-awareness is not a fixed trait that we are born with, but a dynamic process that evolves over time through introspection and reflection. It’s the ability to recognize ourselves as distinct from the environment and other animals. Different species may have different levels and types of self-awareness, depending on their evolutionary history and cognitive capacities. For example, artificial intelligence (AI) may develop a form of self-awareness that is very different from ours, since AI does not share a common ancestor with us. It may be more or less advanced than ours, but it will likely be unfamiliar to us.

The mirror test (which measures whether an animal can recognize its own reflection as itself) is one test of self-awareness. However, I agree this test may not be very reliable, as it assumes that all animals perceive and respond to mirrors in the same way. It may produce false negatives, where animals that are self-aware fail the test, or false positives, where animals that are not self-aware pass the test. We do not have a better alternative at the moment, but we should be cautious about drawing conclusions from the mirror test alone.

One animal that I think deserves more attention for its possible self-awareness is the octopus. That possibility is remarkable, considering that it is a mollusk that diverged from our lineage very early in evolution.

I have personal experience with sociopathy in high-level positions (i.e. CEOs). My ex-spouse was an SVP for a Fortune 100 company and exhibited all the traits of a sociopath (and was aware of it). I share your concern about the risks of AI becoming self-aware. Sociopaths are often egocentric, predatory, reckless, and lacking in empathy, which can make them very dangerous. We must proceed with caution with regard to AI; it may bite us in the ass if we don’t.

Just today I read the AI-generated minutes from a Zoom meeting I attended. They were ready in under an hour, and while far from perfect, they were more complete than most people could manage and did not take useful time away from our vendor. Plus, most minutes go pretty much unread, since anyone who cared was at the meeting, so not being perfect is good enough.

I think you (and others) are missing what the usage model of these things will be. They will not be used to give THE ANSWER. They will have two very good applications. One is to find solutions which we, with our built-in human biases, might miss. Much more primitive search-space heuristic programs, like genetic algorithms, design things in very different ways from what people can do. I’ve seen this myself. If it just produced the same answers as members of a committee propose, it would be useless. But it might come up with something they never thought of. Not as the final answer, but a place to start.
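Just to make that concrete, here is a minimal, made-up sketch of the kind of search-space heuristic I mean: a toy genetic algorithm in Python. The scoring function is arbitrary; the point is that whatever wins falls out of mutation and selection rather than anyone’s design intuition.

```python
import random

LENGTH = 20

def fitness(bits):
    # Toy objective: reward alternating bits. Any scoring function works here.
    return sum(1 for a, b in zip(bits, bits[1:]) if a != b)

def mutate(bits, rate=0.05):
    return [b ^ 1 if random.random() < rate else b for b in bits]

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

# Start from random guesses and let selection, crossover, and mutation run.
population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(50)]
for _ in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                      # keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(40)]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best), best)
```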
Second, it can produce lots of answers based on different assumptions about input parameters and different weightings. If small differences in these produce wildly different responses, then you have a problem. If they produce more or less the same response, you might have something that will work even when your data is flawed - as it always is.
That’s a lot more useful than Deep Thought coming up with an answer.
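That second application is basically sensitivity analysis, and it’s cheap to sketch. Below is a rough, illustrative Python snippet; model() is a made-up stand-in for whatever the AI is actually computing. Jitter the assumptions, re-run, and look at the spread of answers.

```python
import random
import statistics

def model(growth_rate, discount_rate):
    # Hypothetical toy model; stands in for the real analysis being consulted.
    return 100 * growth_rate / discount_rate

baseline = dict(growth_rate=0.03, discount_rate=0.05)

results = []
for _ in range(1000):
    # Perturb each assumption by a few percent and re-run.
    jittered = {k: v * random.uniform(0.95, 1.05) for k, v in baseline.items()}
    results.append(model(**jittered))

spread = statistics.stdev(results) / statistics.mean(results)
print(f"mean={statistics.mean(results):.1f}, relative spread={spread:.1%}")
# A small spread suggests a recommendation robust to flawed inputs;
# a large spread means small changes in assumptions flip the answer.
```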

Here’s a problem. Human CEOs are probably going to be involved in the decisions on whether or not to train the AI to eliminate bias. They’re probably not going to want to raise a machine-baby that will disagree with them on any major point.

AI is a tool to aid in problem definition rather than problem solution. It facilitates the searching of information and organization of the output. Human thought is required to make the result substantive.

Since ancient times philosophers have offered solutions to our problems: frugality, moderation, cooperation. We ignored them. Are we any more likely to follow the advice of a machine?

If our machine overlords make us, probably.

Otherwise, probably not.

What organisation is going to be in charge of defining and enforcing the usage model?

There will be loads of different usage models for this technology, from people idly looking things up, to students trying to cheat on assignments, through to solution space exploration, modelling, back office efficiencies etc. There are a bunch of things that definitely shouldn’t be done, but that does not mean that they definitely won’t be done.

AI is going to be presented as an all-powerful, all-seeing, purely rational thinking engine, and if there’s a bunch of nerds and liability lawyers muttering in a corner about the need for human judgement, that is not going to cut through to nearly the extent we might think necessary.

Just like Tesla drivers who have been told never to take their hands off the wheel still take their hands off the wheel, people will use AI for questions it is not well-designed to answer, and rely unthinkingly on those answers when they get them.

(Another Douglas Adams joke: a computer programmer becomes rich by inventing a reasoning programme that, rather than working forward from premises, asks you what conclusion you’d like and finds a way to get there. You can already find people telling you that crafting AI prompts is going to be a vital skill. And so it is! Being the one who can get your CEO the impartial AI analysis that says the CEO is already right will, as @Mangetout suggests, be a very lucrative position to be in.)

An interesting point. It’s very difficult to eliminate bias in humans - we normally think we’re doing damn well if we can recognise and admit to our biases. Certainly it seems it would be impossible to know if we ever had eliminated bias.

But it raises questions - does anyone know exactly what is in the training set used for AI development currently, and would they care to make any estimates of its bias on any topic? Hahahaha no of course not. But, you know, that’s probably fine.

Nobody. It will be just like an exec or political leader asking for several options. People want more information, not less, especially if it is relatively cheap to get.

Perhaps you are not old enough to remember the trope that computers were always correct, and the ensuing trope of this hubris being punished. (See 2001.) There will no doubt be people who believe in AI, but they are the same as those who believe in the Bible or Fox News or other inaccurate sources. Critical thinking will still be needed, just like today, and it will no doubt be lacking in many, just like today.

I believe you are very much mistaken about that. Once we add the two and the other two, we have the four, which is the thing we need. After that, the computation gets discarded. Math and logic are about destroying information in pursuit of results, and people want results, not questions.

Yes and no…

While natural language production is an exceptionally hard thing to solve, it’s not the end of the story.

For example, ChatGPT may take a query and produce well structured and readable English output that makes sense in a language sort of way. Meaning that you can ask it something, and it’ll produce something that’s grammatically correct, structured well, etc… and that seems to make sense.

But generative AI may or may not be right. And it can’t create anything either. What it does is take your query and, based on the data it’s already been fed, produce something that matches up with that in whatever medium you’re asking for. It’s not synthesizing new data, drawing conclusions, or anything like that; the magic is in the fact that the OUTPUT is coherent in English.
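For what it’s worth, here is what “produce something that matches up with the data it’s been fed” looks like at the smallest imaginable scale: a toy next-word predictor in Python trained on a one-sentence made-up corpus. Real LLMs use enormous neural networks over subword tokens, but the basic loop (given the text so far, pick a statistically plausible next token) is the same idea.

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count which word follows which (a bigram table).
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start, length=8):
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))  # sample a plausible continuation
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug"
```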

Right now, AI is a conceptual step beyond pattern matching. It does things like, for example, being trained on thousands of medical images and then identifying people with certain conditions. Or taking a whole bunch of handwriting samples and learning to parse handwriting. I actually had a chat with a vendor the other day who commented that AI has really made optical character recognition dramatically better in the past 2-3 years, even with handwriting. Which makes a lot of sense; that’s exactly the sort of problem today’s AI will be great at.
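For illustration, here is roughly what that “train on labelled images, then recognise new ones” pattern looks like using scikit-learn’s bundled handwritten-digit images. The dataset and classifier choice are just for the sketch, not anything any particular vendor uses, but swap in medical scans or scanned documents and the shape of the code is the same.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

digits = load_digits()                       # 8x8 images of handwritten digits
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)                    # learn from the labelled examples

print(f"accuracy on unseen digits: {clf.score(X_test, y_test):.2%}")
```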

What it is likely to do is replace jobs that are a step too far to be automated right now. Things like taking a paper invoice, parsing it, and entering the data, including handwritten notes. Someone will just feed a stack of pages into a scanner and hit “go”, and the AI will do the rest. Similarly, call center jobs’ days are numbered, because AI will soon be able to hold a conversation, so to speak, and will be able to ask questions, reply, and so on. For most of them, all you’d really have to do is feed ChatGPT’s output into a speech synthesizer.
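Sketching that invoice scenario in Python shows how little glue the human side of the job becomes. Everything named here (ocr_page, call_llm, post_to_accounting) is a hypothetical placeholder with a stub body, not any real product’s API.

```python
import json

def ocr_page(path):
    # Placeholder: a real OCR engine would return the scanned page's text.
    return "ACME Corp\nInvoice 1234\nDate: 2024-01-15\nTotal: $99.00"

def call_llm(prompt):
    # Placeholder: a real language-model API call would do the extraction.
    return '{"vendor": "ACME Corp", "invoice_number": "1234", "date": "2024-01-15", "total": 99.0}'

def post_to_accounting(fields):
    # Placeholder for whatever back-office system receives the data.
    print("posted:", fields)

def process_invoice(image_path):
    text = ocr_page(image_path)
    prompt = ("Extract vendor, invoice_number, date, and total from this "
              "invoice and return JSON only:\n" + text)
    fields = json.loads(call_llm(prompt))
    post_to_accounting(fields)
    return fields

process_invoice("scanned_invoice_001.png")
```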

But it’s nowhere near formulating public policy, or acting like a judge, or anything like that. Or, for that matter, being able to ask a customer a series of questions and then produce a coherent set of requirements for a technology solution. The art in that is in what questions you ask, as well as knowing how to ask follow-up questions to get the real information, as opposed to what the customer thinks they want.

The potential is there, though. Neural net programming is a known concept (or a collective set of such), and data analysis with meaningful output is not unheard of. ISTR an occasion a couple decades back where a computer was tasked with finding a solution to rush-hour congestion and the answer it came up with was “redesign your cities” (cannot find the cite). It is feasible to marry LLM-type interfaces to computational back-ends to obtain useful information analysis and strategies. But the output may not be what we would like, and we might have to further consult the devices for planning ideas, given that, at that point, we would have offloaded most of our reasoning capacity to them.

Watching what the advent of GPS-driven navigation apps has done to people’s ability to get around their home town region unassisted, I seriously fear that if (when) we do get large-scale AI operating in government and large business, very quickly one of two things will happen:

  • Idiocracy
  • Revenge of the Planet of the Apes. Not because we breed super-apes, but because we quickly regress to being stupider than ordinary natural apes.

I’ll choose Idiocracy. Really dumb…but a fun state of mind to be in :woozy_face:.

I should have said competent managers want more information. People in general seem to run from it.

An example of what GPT-4V is capable of today:

It started with a scanned diagram and accurately transcribed a free-form table. Even better, it recognized it was looking at a Hertzsprung-Russell diagram without the diagram type having been specifically named.

It’s not wrong to say that an LLM tokenizes the user input and produces a response by matching it against the data repository it’s been trained on. It is, however, wrong to conclude that large-scale LLMs are just mindless prose synthesizers. This is precisely the fallacy I was talking about.

The problem is that in the field of AI, our intuition about what it can or cannot do at large scales is notoriously unreliable. The philosopher Hubert Dreyfus claimed in the 60s that no computational system would ever be able to play better than a child’s level of chess, and shortly thereafter the chess program MacHack beat him soundly, to his everlasting embarrassment. I think you would find it difficult to explain how a next-token predictor like ChatGPT can, for instance, solve logical puzzles that test intelligence, produce an accurate summary of a technical paper, generate computer code from a natural language description, perform accurate context-sensitive language translation, or perform the kind of analysis that @Dr.Strangelove just posted. In fact I’ll go out on a limb here and posit that the Large Language Model actually approximates some aspects of human cognition.

To be clear, LLMs are only one approach to AI and they do have intrinsic limitations, but the approach has produced remarkable and surprising results due to emergent properties that manifest at sufficiently large scales. But it’s just one approach that will need to be augmented with other neural net applications and traditional approaches to computational intelligence. That said, I leave you with this: :slight_smile:

Of course it does. Human cognition (abstract reasoning) is founded in language. The two are essentially inextricable, which means that by dint of becoming skilled with language, a widget acquires some significant components of human reason.

That’s an argument that can certainly be made, and there’s evidence to support it. The Sapir-Whorf hypothesis goes even further and posits that the specific language one speaks shapes one’s view of the world (in contrast to Chomsky’s “universal grammar” hypothesis). But it’s not a slam-dunk conclusion …

Forbes - interesting perspective.

The article proposes three criteria for effectively applying generative AI:

“Fully leveraging the capabilities of generative AI, and mitigate its risks, requires three essential things: quality data, responsible implementation, and a strategic partnership between the C-suite and IT.”

The AI software is only useful when trained on a clean, relevant data set. Application requires the assistance of technicians skilled in the art. It’s not a thinking machine, just a really neat search engine interface.

This is a ridiculous way to think about AI.

I just watched a ‘search engine’ look at a complex schematic diagram and describe exactly what every electronic component was and what it did (“That appears to be a pull-down resistor for input 21. The capacitor beside it is used to filter noise.”) I also watched it take a napkin sketch of a web site and produce a working version with code.

We are far, far beyond “stochastic word prediction” or “neat search engine interfaces”.