Straight Dope 2/24/2023: What are the chances artificial intelligence will destroy humanity?

Probably the closest thing to AGI in the works is autonomous vehicles. Developers are still struggling to create a program that can visually recognize obstacles, maneuver around them, and get to its destination without any idiotic errors. Right now self-driving car researchers would be thrilled if they could make a program as intelligent as Dobbin the milk-wagon horse, who had memorized his delivery route.

I took a look, and the learning is independent. In other words, having learned how to play chess, AlphaZero isn’t then any better at learning how to play shogi. That’s probably because their focus is on getting the best-performing game player, not the most efficient learning process.

There is a concept called transfer learning, where you first take a neural network (NN) that has learned to solve problem A, then you clear out the last part of the network, and finally you teach it to solve problem B. The idea is that the first part of an NN is just trying to make sense of the problem. If the problems are similar enough, then they can effectively share that learning.

This shows up quite a bit in imaging tasks. There is a lot of overlap in the solutions for recognizing faces, telling cats from dogs, and estimating how far away objects are from the viewer.
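To make that concrete, here’s a minimal sketch of transfer learning in PyTorch. Everything in it (the pretrained ResNet-18, the two-class cat/dog head, the Adam optimizer) is just an illustrative choice, not a reference to any particular project:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet; its early layers have
# already learned to "make sense of" images (edges, textures, shapes).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze everything that was learned on problem A.
for param in model.parameters():
    param.requires_grad = False

# "Clear out the last part": swap the final layer for a fresh head
# for problem B (here, two classes, e.g. cat vs. dog).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters get trained on problem B's data.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Training loop sketch (a dataloader of cat/dog images is assumed):
# for images, labels in dataloader:
#     optimizer.zero_grad()
#     loss = loss_fn(model(images), labels)
#     loss.backward()
#     optimizer.step()
```

The point is that the expensive, general-purpose “making sense of images” part is reused wholesale; only the small problem-B-specific head is learned from scratch.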

I think this is very similar to what you are describing, but applied to simpler problems. It will be really cool when the AI can recognize common concepts in games and fast-forward the learning process.

I just coincidentally came across this rather pessimistic view of this topic:

https://www.smbc-comics.com/comic/kill-all-humans-a-flowchart

I think it is less likely than that, but it might happen. No species lives forever.

God, that’s badly written. ChatGPT’s creative writing is pretty good, but its formal/academic writing lags way behind.

As always, the answer is yes and no, it depends, kind of but not really.

Certainly people are doing research on techniques that might lead to AGI, but nobody (to my knowledge) has explicitly set out with the goal of creating an AGI. We simply do not know how to do it yet. My personal belief is that it will be an algorithm that can derive algorithms. I think this most closely models human intelligence: we have a brain that has evolved to solve problems in general, to find a way to perform a task. So in that sense my PhD work, which was on using AI to infer algorithms for processes, is a possible route to AGI. Certainly when I was building it I had no expectation that an AGI would suddenly emerge.

And there are lots of other researchers who are pursuing similar or vastly different techniques that might, maybe, some day (year, decade, century, never) yield an AGI.

That sounds like a line from a dystopian post-robot-rebellion movie.

Yeah, I just wonder if the best way to go about it, assuming you do want to go about it, is to try to base it all on one big neural net, or on many smaller networked ones.

Yes, and I did say ChatGPT was “(an admittedly low-level AI)”.

ChatGPT isn’t conscious, let alone self-aware. My understanding (which isn’t great) is that it is basically just gathering facts and figures from human input and regurgitating what it has learned in well-designed prose, depending on what is asked. As such, it’s pretty impressive at this stage of the game. At least I think so.

The game-changer, IMHO, is when ASI develops and consciousness and self-awareness emerge. At that point, it will be an independent thinker and will almost certainly excel at every intellectual and creative pursuit it puts its mind to. By comparison, humans will be to ASI as chimps are to us.

In my opinion, no. I don’t think neural networks scale well enough. There is too much wasted space in neural networks to have the efficiency needed for AGI. Maybe, as you say, a very large number of small ones would work, since smaller networks are more efficient. Even then, I don’t see it happening with the current algorithms. We simply are not as good as evolution at creating efficient neural networks. Now, it is possible that somebody will crack that problem. Or we’ll develop sufficient computing power to build more efficient networks. Or develop sufficient computing power that inefficiency doesn’t matter. Or we’ll crack open neural networks so they are no longer black boxes, allowing us to trim the inefficiency. There are a lot of ways forward.
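To illustrate what “trimming the inefficiency” can look like in practice, here’s a minimal sketch of magnitude pruning using PyTorch’s built-in pruning utilities (the layer size and the 30% amount are arbitrary examples):

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(512, 512)

# Zero out the 30% of weights with the smallest magnitudes, on the
# theory that they contribute the least to the layer's output.
prune.l1_unstructured(layer, name="weight", amount=0.3)

# Pruning is applied via a mask; bake it in permanently:
prune.remove(layer, "weight")

sparsity = (layer.weight == 0).float().mean().item()
print(f"sparsity: {sparsity:.0%}")  # ~30% of the weights are now zero
```

The pruning literature suggests many trained networks keep most of their accuracy even with a large fraction of weights removed, which is exactly the kind of wasted space I mean.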

Personally, and I’m obviously biased, I prefer my approach of building algorithms by a mix of inference and search. It has been proven that my algorithm is polynomial time, so it scales fairly well (for deterministic problems it was able to infer solutions for reasonably sized problems in less than 4 hours, if I remember right; it has been a while). There are some drawbacks to my algorithm, but I think they’re more solvable. Again, I’m biased. :slight_smile:
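To give a flavor of what “inferring algorithms by search” means (a toy sketch only, nothing like my actual system), here’s the naive version: enumerate programs over a tiny made-up DSL and return the first one consistent with the input/output examples:

```python
from itertools import product

# A tiny DSL: each primitive maps an integer to an integer.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "dec": lambda x: x - 1,
    "dbl": lambda x: x * 2,
    "neg": lambda x: -x,
}

def run(program, x):
    """Apply each primitive in the program, left to right."""
    for op in program:
        x = PRIMITIVES[op](x)
    return x

def synthesize(examples, max_len=4):
    """Return the shortest program consistent with every (input, output) pair."""
    for length in range(1, max_len + 1):
        for program in product(PRIMITIVES, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

# Infer a program for f(x) = 2x + 1 from examples alone:
print(synthesize([(1, 3), (2, 5), (5, 11)]))  # -> ('dbl', 'inc')
```

Naive enumeration like this blows up exponentially with program length, of course; the whole point of adding inference on top of the search is to cut it down to something tractable.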

I wasn’t criticizing your post. I had actually initially missed that you said it was written by ChatGPT as I was scrolling down, and as I was reading it I thought, “This is clearly written by ChatGPT.” I scrolled back up, and sure enough it was (then I posted my remark). I just find it remarkable that people think ChatGPT writes well, and in particular that students are using it to cheat. If a student handed that in to me, it would be an easy F, and quite probably an accusation of academic misconduct.

I do wonder how many teachers are using ChatGPT to write their lesson plans.

In a lot of schools ChatGPT is both doing the homework and marking the homework, and doing a half-assed job of both, but nobody has noticed yet.

[whistles innocently]

Seriously, it wouldn’t surprise me, but the difference is the teacher is not trying to learn how to write a lesson plan. I saw somewhere that a teacher was having students use ChatGPT to generate an essay, and then having them write an essay on the ways in which ChatGPT was wrong or incomplete. I think that’s pretty clever. Certainly, ChatGPT is changing education, and it is only going to get better, so we may as well find other ways to assess students now.

Which makes more sense than trying to fight it. If the point is to teach kids to write an essay, then they should be doing it with the best tools available.

You can do in-class writing assignments to practice actual writing, but being able to work with a ChatGPT descendant to write better and more efficiently seems a more useful skill.

How does our agent heal itself? A circuit degrades due to component failure, code errors, interaction with a cosmic ray, etc., much like a human cancer. The agent machine will just repair itself after tolerating/accepting a level of error, but what if the error/mistake doesn’t want to be repaired? Perhaps the error is not actually “wrong” but is more optimal? Is this an evolution of the agent or an internal self-destructive war? Assuming perfection is foolhardy.

Our agent will not only have to police its environment but also its internal machinations. The problems, and the resources necessary to heal, will grow exponentially before it ever sets out on grand explorations. Or the internal competition for resources will degrade or destroy our wonderful contraption.

This thread caused me to revisit “The Last Question” by Isaac Asimov. Every time Multivac is asked if there’s a way to reverse entropy, it answers “insufficient data for a meaningful answer.” It keeps working on the problem after the heat death of the universe, and after some unspecified time it finally solves it, says “Let there be light,” and restarts the universe. Neato!

But after the heat death of the universe, what was it running on? Asimov says it was in “hyperspace,” which may be handwaving.

It was definitely handwaving. I’m sure Isaac knew the science made no sense at that point, but he also knew not to let that stand in the way of a good story.

The Economist says experts say there is a 3.9% chance (average) of Big Trouble In Little Chatbots…

Don’t believe the hype.

Was not sure in which of the many AI-themed threads to dump this, but the following nice video reviews some of the history and makes the point that weaponized perception, cognitive psyops, and mind control are already standard, nothing new:

“…the brain under these circumstances becomes a phonograph playing a disc put on its spindle by an outside genius over which it has no control.” [Allen Dulles, 1953]

Of course it is not good for humanity.