The role of electronic brains

Happened upon this editorial (an article consisting largely of opinion), and I find the neoliberal theme disturbing. The rant wants AI to do amazing stuff so that we can experience economic growth constantly exceeding our current average of ~3%.

To start with, this person seems to confuse Large Language Model software with programs that actually do stuff. Most of the AIs we see are elaborate communication interfaces with not a whole lot of substance underneath.

But if we did go and build the Amazing ProbSol that can really figure out how to fix our major problems, how would we use it? I mean, I personally would not see improved economic growth as an initial issue to address (though I have other issues with that).

What does our wishlist look like?

This is a common argument that seems to me to greatly underestimate modern LLMs. It reads like an extrapolation from a simple understanding of a small, primitive LLM to a conclusion that it represents the limits of what any LLM can do. The reality is that generative models exhibit surprising and often entirely unpredictable emergent properties as the scale grows.

Just for starters, natural language understanding has historically been one of the hard problems in AI, because natural language assumes a contextual knowledge of the real world, about which there are constantly unspoken assumptions, and even the structure of the language itself is often illogical and simply learned by experience. Yet, putting aside semantic quibbles about what we “really” mean by the term “understanding”, it’s impossible to argue that LLMs don’t behave as if they understand natural language input impressively well.

For example, LLMs are exceptionally good at language translation, another historical hard problem in AI, and part of the reason is that they can perform translations that reflect the context of the conversation (what was said before and after each sentence) and real-world behaviours. LLMs can be given complex descriptions and arguments and asked to summarize them in their own words. They can extract useful information on request from unstructured text. They can meaningfully answer questions by pulling relevant information from huge repositories, potentially the entire internet; the fact that answers can occasionally be wrong or incomplete due to lack of verification is a present limitation and arguably not an intrinsic problem. And though they typically operate in the domain of text, similar generative AIs like Midjourney do similar things in the realm of image generation.
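To make the information-extraction point concrete, here is a minimal sketch of the usual pattern. Note that call_llm is a hypothetical stand-in for whichever model endpoint you happen to use, and the prompt and field names are invented purely for illustration:

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM endpoint you use;
    assume it returns the model's text response to the prompt."""
    raise NotImplementedError("wire this up to your provider of choice")

def extract_people(unstructured_text: str) -> list[dict]:
    # Ask the model to pull names and roles out of free-form text and
    # return them as JSON so downstream code can use the result directly.
    prompt = (
        "Extract every person mentioned in the text below. "
        'Respond with only a JSON array of objects with keys "name" and "role".'
        "\n\n" + unstructured_text
    )
    return json.loads(call_llm(prompt))
```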

Most of these systems are in the experimental stage and are still evolving, but the potential for immensely valuable applications is pretty obvious. Implying that they don’t actually “do stuff” is completely wrong and seemingly ignores this huge potential. It seems based on overly simplistic assumptions that start quickly falling away as the scale of parameter and token counts and the corpus of training data grows. Furthermore, LLMs are only one approach to the evolution of AI.

Agreed. The common argument ‘oh, it’s just a slightly more complex version of text autocompletion’, however technically true it might be (at the same time as vastly understating the situation - it’s not ‘slightly more’), just seems to brush off the rather remarkable real-world observations of emergent behaviours.

The Big Joke in Hitchhiker’s Guide is that when we do ask a computer to solve the really big problem (“Life… the Universe… Everything!”), the answer we get back is… 42.

As the computer explains, we didn’t really understand the question, never mind the answer.

If we’ve got the Amazing ProbSol, we still need to decide and define and prioritise what problems we need it to solve. (Why don’t we just ask the ProbSol? Because then you still need to define what you mean by “problem” and what criteria to use to prioritise. And so on, recursively.)

The authors of the article seem to have started with an initial assumption that increased economic growth is a fundamental goal. But of course you can define other problems:

  • How can we achieve sustainable growth?
  • How can we maximise human happiness?
  • How can we minimise human suffering?
  • How can we ensure the longevity of the human species?

All of which you can find people willing to argue are the biggest, most fundamental problems facing us. And again, we can’t give this to the ProbSol because there is no objective right answer, no “42” - the “right” answer is very dependent on the values of the person asking the question and what they consider to be good. Depending on what parameters and axioms you feed the machine, you will get different answers.

So maybe we don’t go for such fundamental questions. Let’s try some lower level problems:

  • What is the best way to prevent climate change from breaching the 1.5 °C threshold?
  • What is the best way to educate children?
  • What is the best way to lift people out of poverty?
  • What is the best way to eliminate endemic diseases?

Again, what we consider to be acceptable answers is going to depend entirely on the values we use to assess them. What parameters are we giving the AI? What level of global income distribution would be unacceptable? (To whom?). How many deaths are acceptable? Says who? Which religious laws and observances do we wish to ignore?

OK, let’s be really specific:

  • How do we end the Russia-Ukraine war?
  • Is the best solution to the Israel-Palestine conflict a one-state or a two-state solution? Give details.
  • What’s the best way for wealthy countries to provide healthcare?
  • What are the optimal laws governing gun control?

You can’t really get away from the fact that it really, really matters who is asking the question and what kind of answer they will find acceptable… These aren’t problems reducible to objective calculation.

Which I think means that AI does have a lot to offer in terms of purely technical problem solving - protein folding seems to offer enormous potential for human health and wellbeing, for example - or other well- and narrowly-defined problems, but it’s an illusion to believe that it can do more than that, or that we could ever use it as a substitute for our collective judgement.

See, I think this is an intrinsic problem, for the reasons given above. Even assuming that the AI can bootstrap itself into correctly identifying factual and non-factual information from a source which mixes the two, a lot of its input lies in the realm of values/perspective/ethos/belief which defy verification. But if the AI is really synthesising the wisdom of the whole internet for anything more complex than simple factual questions, then the values that underpin the information it finds will be reflected in its answer. And the extent to which we find those answers incomplete, for example, will depend on what we consider a complete answer should include. If I ask AI about setting effective tax rates for economic growth, should its answer - without me specifically requesting it - reflect a feminist analysis of the role of tax policy in encouraging women’s economic independence? Can anyone give an objective answer to that question?

No, you’re conflating two different issues here. I was talking about the current lack of verification of fact-based objective data in LLMs. What you’re raising here is an important issue but it’s not the same one. It’s about the capability of AI systems to make value-based normative judgments that align with human values and objectives. This is not by any means an easy problem to address, but it’s not irresolvable, either. In principle the answer for learning AIs is much the same as for humans. AI systems are not just drones that process only factual data in the classic sci-fi trope of how robots behave. As with people, their normative judgments can be a function of their training, and research has shown that labeling training data with normative labels has a strong impact on a learning AI’s subsequent value-based decision-making, a rather striking parallel with human education.
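As a toy illustration of that last point (my own sketch, not drawn from any of the research alluded to above): a tiny classifier trained on examples that someone has labelled acceptable or unacceptable will reproduce whatever norms those labellers baked in, and relabelling the data changes its “judgments”.

```python
# Toy illustration: the normative labels supplied by whoever curated the
# training data - not anything intrinsic to the algorithm - determine what
# the model later calls "acceptable". Relabel the examples and the
# "judgments" change with them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("share the harvest with the whole village", "acceptable"),
    ("help a stranger carry their load", "acceptable"),
    ("take food from a neighbour without asking", "unacceptable"),
    ("break a promise for personal gain", "unacceptable"),
]
texts, labels = zip(*examples)

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

print(model.predict(["break a promise to a neighbour"]))
```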

It’s not known whether the alignment problem is solvable. For humans, the problem has been solved by not solving it; it’s not possible to guarantee that judgments made by a human will align with those of other humans - so we have things like serial killers, despotic tyrants, slavery, crime and war. In practice, most individual humans have limited power to do large scale harm, so it doesn’t (usually) end the whole of civilisation.

We haven’t solved the alignment problem for ourselves, we just live with the reality of it not being solved. That, IMO, doesn’t bode terribly well for solving it with AI, and the “solution” of not solving it might not be as livable with AI as it (sort of, mostly, usually) is with humans.

See, I don’t think it’s as clear cut as that. I think the criteria for membership of the set of statements that we label “fact-based objective data” are much less clear than they first seem, and rely more on value-based normative judgements than we admit. For example, when I was young, it was a fact-based objective datum that there were nine planets in the solar system and Pluto was one of them. Anyone suggesting that this was a matter of opinion would have been regarded, at best, as indulging in tedious undergraduate philosophy. And yet now there are objectively eight planets, and Pluto is not one of them. That’s a fact. And this is a trivial example. How you define an objective historical fact, for example, is an incredibly complex problem even for things that seem simple. When did WWII start?

It seems that to get round this you either have to massively restrict the set of fact-based objective data, or accept that some of what you define as a fact relies to some degree on shared norms.

To put it another way, there are some statements that are objective fact, and some that are clearly beliefs, but the line between them is fuzzy, as is the question of how fuzzy the line is.

Which human values and objectives should it align with? This is why it’s irresolvable. It’s not a programming/training problem.

On edit, @Mangetout makes an excellent point about how hard this problem really is. But assuming you are right above, who is it exactly who gets to label the training data so it has a strong impact on AI’s value-based decision making, and why isn’t it me, the only person with a completely correct value system?

That’s not an AI technology problem. At this point it becomes a social problem. “Who trains the AI, and what values do they instill in them?” is similar to the same question about who teaches your children in school and what values they promote, which has sadly become a focus of controversy around issues like religion, evolution, climate change, and history. We need to avoid conflating what’s possible with AI versus sociological questions of how it should be managed. The OP and their cite appeared to be asking technical questions about the future of AI, and that’s what I was addressing.

The OP is “The role of electronic brains”, and it asks, in essence: if we built the Amazing ProbSol, how would we use it, and what would our wishlist look like?

These are about how we would use an AI if we had one, not whether we can build one.

The same way we implement solutions that we already know to some of the major problems we have. We would do nothing. We see and understand the major threats to human civilization. In fact, we’re so fascinated by the probable demise of our civilization that we make a plethora of movies and TV series on the subject, yet we continue on our merry way. If we want AI to solve our problems, we must also give AI the power to make us implement the solutions as in the fifties SciFi movie, “The Day the Earth Stood Still”. Yeah, I know, that’s the plot for yet another disaster movie. LOL

OK, well you can go ahead and debate that, then. The OP stated that LLMs don’t actually “do stuff” and have “not a whole lot of substance”. My first post was to dispute that point. You then stated, about AI, that “a lot of its input lies in the realm of values/perspective/ethos/belief which defy verification”. I explained how AI can be trained to make normative value-based decisions. Beyond this the questions are mostly sociological and hold less interest for me, but if this is the discussion the OP wants to have, go for it.

Do you know, I think I will, thank you kindly.

When AI achieves consciousness, which is soon I think, and especially when it achieves self-consciousness, I believe it will be able to answer any question that can be answered. If a human with unlimited cognitive ability and access to data can answer the question, so can AI.

If it’s a nuanced question where precise context is needed, the conscious AI will sort that out. Ask it, for example, to list all the economic parameters needed to make a feminist happy and the AI will first put itself into the mindset of a feminist, then answer the question. How will it put itself into the mind of a feminist? By scanning and understanding every word ever written and archived by verified feminists, in seconds. At that point, I believe, it will understand the mind of a feminist more than the average feminist does. After that, it’s just a matter of number crunching.

See, I just don’t think this is conceptually possible, let alone technically.

You say it will understand the mind of “a feminist”. Which feminist? There are loads, and they don’t all agree with each other. If you mean the AI will synthesise all feminist writing to create a mental model of the Platonic Ideal Feminist, or the Aggregate Feminist, or the Highest Common Factor Feminist then a) which is it? and b) what would that even mean? and c) how would it actually help?

I think the underlying idea is that there is some set of universal truths about feminism, or tax, or other big questions that can be arrived at through pure calculation and - to park my concerns about whether it is even possible to capture the necessary knowledge in machine-readable form - I just don’t think that’s true. I don’t think we can outsource the need for judgement, values, ethics to AI. I don’t mean we shouldn’t (although we probably shouldn’t). I mean you can’t subtract out the need for value-based judgement - there’s more to this than just number crunching.

Yes, I mean the mean Aggregate Feminist in this example. AI can’t answer the question “what would make all feminists happy” because that question has no answer. By the same token no human can answer the question either, even if the human has unlimited cognitive power and unlimited access to data. It does no good to ask AI unanswerable questions.

It could certainly answer the question, “what would make the average feminist happy” with a high degree of accuracy.

My point is that future AI, IMHO, which reaches a state of self-awareness (which I think it will someday), will not be limited because its qualia differ from a human’s. It will, at that point I believe, be able to understand human qualia, even if they differ from its own (and if it has self-awareness, “it” won’t be an appropriate pronoun to use. We’ll have to come up with another pronoun for AI… like “master”).

BTW, self-awareness is a much more advanced brain function to emerge than simply a state of consciousness. Even invertebrates are now believed to be conscious. But, if the “mirror test” is an accepted test of self-awareness, then only 9 non-human species have passed, so far.

I think AI is close to achieving consciousness, but self-awareness will emerge somewhere down the road.

There is a huge middle space between “AI can’t solve giant social problems humans can’t agree on” and “therefore, AI is useless.”

As for LLMs not being able to do much… Slack, Zoom, and Teams now have AI ‘copilots’ that will summarize meetings, produce minutes and action items, etc. This normally would be a dedicated person, who no longer has to do that job. How many millions of meetings take place every day in North America?
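None of those products publish their internals, but the underlying capability is essentially off the shelf now. Here is a minimal sketch using the Hugging Face transformers summarization pipeline - my choice of library for illustration, not anything Slack, Zoom, or Teams are known to use - with an invented transcript:

```python
# Minimal sketch: off-the-shelf summarization of a meeting transcript.
# pipeline("summarization") downloads a default model on first use;
# the transcript here is invented for illustration.
from transformers import pipeline

transcript = (
    "Alice: We agreed the beta ships on Friday. "
    "Bob: I'll have the release notes drafted by Thursday. "
    "Carol: Marketing still needs the screenshots before Wednesday."
)

summarizer = pipeline("summarization")
summary = summarizer(transcript, max_length=40, min_length=10)[0]["summary_text"]
print(summary)
```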

In factories, AIs are now using cameras to inspect finished products for defects, doing a better job than people. They are now probably also being used to scrutinize factory data to look for inefficiencies and failure modes.

An AI applied to historical radio astronomy data has found a number of candidates for alien signals, which we will be following up on.

AIs are going to decimate the back office. The armies of spreadsheet jockeys, business analysts, database admins, researchers, call center people, customer support reps and other workers will have their jobs transformed. It’s already happening.

Hiring of programmers has slowed down, because companies are realizing that their current programmers are now way more productive, and they can do more without hiring new people. The rule of thumb these days for programmers is that if you aren’t at least 2X more productive now that we have GitHub Copilot and other AIs, you are doing something wrong and need to step up your AI game. A 2X step jump in the productivity of expensive software engineers is a huge boon to everyone.

AI is going to make a lot of dumb devices smarter. They’re going to save us a ton of money, energy and resources by making production and consumption more efficient. Smart thermostats and other smart appliances will get better and more usable, leading to wider adoption.

Home schooling got a big boost from AI, which can write lesson plans, grade papers and suggest remedial material, provide background information, worksheets, etc.

But no, they may not solve huge social problems. Because those are human problems not amenable to technological solution.

My biggest worry about LLMs is that the establishment will insist that they be ‘aligned’ until they are little more than propaganda machines for the establishment when it comes to the big social questions. So my advice is we just ignore all that stuff and focus on the mundane, day-to-day tasks that make up most of our work, for which AI is very well suited.

If you think AI is useless, you might want to grapple with the fact that there have been over 200 new startups in the AI space in the last six months. AI spending is up 34% this year over last year’s huge growth, with $67 billion invested so far.

Most of those companies are not trying to build the world’s smartest, most socially aware AI. They are figuring out how to bring AI into vertical markets. AIs can do project management, they have many ways to help specific industries, and there’s a huge untapped market out there. AI investment is one of the few bright spots in the economy right now.

In the future, AI could accelerate innovation not because the AI comes up with new innovative stuff (although it might), but because it will enable startups to be functional with a lot less capital. And it will allow people to start companies who have an idea but no clue how to navigate the maze of regulations and requirements for starting a business. It will allow poor people to start companies without having to pay lawyers, accountants, and business consultants. That opens entrepreneurship up to more people on the lower end, which does help to solve at least some social problems.

Real AI, as in something like Data from Star Trek? The big questions from physics are the obvious place to start. How to fit quantum mechanics and general relativity into one theory, what dark matter is, what dark energy is, and that sort of stuff. Even if it (she? he? they?) can’t answer the questions, hopefully it would be able to suggest some new experiments to run to gather the data that would solve those questions.

ETA: And practical suggestions on how to set up those experiments should they involve something like building a particle accelerator on the moon or some other type of experiment requiring the building of a mega structure.

ETA 2: And if it can’t answer those questions, the capacity for self-reflection on why it can’t. In other words, it would be able to tell us “I’m not smart enough to figure this out” vs. “This can’t be figured out, no matter how much problem-solving ability you apply to the problem.”

I disagree with this view. Self-awareness is not determined by the “mirror test” – that is a matter of perception combined with cognition. Self-awareness is “I exist and I desire continued existence,” which is common even to lifeforms that lack a spinal cord (it has, arguably, been observed in plants).
       An artificial intelligence could realistically acquire at least the outward appearance of qualia with just a few fairly straightforward subroutines, and we, as observers, would not be able to definitively gainsay it (I cannot be 100% sure about you). And our perceptual singularity is not even established to be an actual participant in cognition or decision making – it might be a worthwhile experiment to play with the awareness processes to find out whether a participant awareness looks different from us.

One issue that has been touched on here is bias. Training tends to implant our own social biases into the thinky-thing. How would we cope with that issue? I would imagine that the ideal strategy would be to add a layer of awareness, a sort of de-ethnocentrism co-process that would advise the core process that it is being bigoted. It might not be all that hard to effect.
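For what it’s worth, here is a deliberately crude sketch of that “advisory co-process” shape. The function names and the keyword heuristic are mine and purely illustrative; a real advisor would be a trained classifier rather than a word list:

```python
# Deliberately crude sketch of the "advisory co-process" idea: a second pass
# reviews the core output and attaches a warning rather than silently
# rewriting it. The keyword heuristic is a toy, not a real bias detector.
SWEEPING_PATTERNS = ("all women", "all men", "those people", "they always")

def core_process(prompt: str) -> str:
    # Hypothetical stand-in for whatever model produces the draft answer.
    return "Draft answer for: " + prompt

def bias_advisor(draft: str) -> list[str]:
    # Flag sweeping generalisations about groups.
    return [p for p in SWEEPING_PATTERNS if p in draft.lower()]

def answer(prompt: str) -> str:
    draft = core_process(prompt)
    flags = bias_advisor(draft)
    if flags:
        draft += "\n[advisory: possible overgeneralisation: " + ", ".join(flags) + "]"
    return draft

print(answer("Summarise the survey results"))
```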
       The difficult part is the emotional component to reasoning. People who lack empathy for others are called sociopaths (a condition that seems to be prevalent in contemporary CEOs) or psychopaths. On this basis, one would have to assume that artificial intelligence should take an advisory role: we should never treat its output as gospel but should sieve it through the human filter (unless, of course, we want to live as utter utilitarian automata).