I’m late to the party and still reading the thread, but wanted to point out that the roundworm project still hasn’t succeeded at simulating the worm’s responses. I know you linked to that article that says it “behaved like…”, but everything I’ve read says they are still scratching their heads, with more work to be done.
Yes there is danger, and yes computers would need the same types of controls that humans have to guide their actions appropriately when dealing with humans.
Whether you call that set of controls “emotions” doesn’t really matter; what matters is that the functional mapping of inputs to outputs (behavior) is substantially similar to that of humans (where we want it to match).
Just like autonomous cars need to learn the mapping from input data to detections of things like other cars, stop signs, etc., future AI that is well behaved around humans will also have learned the mapping from inputs to what humans consider right and wrong behavior.
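To make the “learned mapping” point concrete, here’s a minimal sketch (the features, labels, and data are entirely made up, and this isn’t anyone’s actual system): the same supervised-learning recipe that maps camera features to “stop sign / not stop sign” can, in principle, map situation features to “acceptable / not acceptable” behavior, with all the usual caveats about how well it generalizes.

```python
# Toy sketch (not any real system): learning a mapping from input
# features to an "acceptable / not acceptable" label, the same way a
# perception stack learns "stop sign / not stop sign" from examples.
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each row is a feature vector describing a
# situation, each label is a human judgment of the behavior
# (1 = acceptable, 0 = not acceptable).
X = [
    [0.9, 0.1, 0.0],
    [0.2, 0.8, 0.5],
    [0.1, 0.9, 0.9],
    [0.8, 0.2, 0.1],
]
y = [1, 0, 0, 1]

model = LogisticRegression().fit(X, y)

# The learned mapping generalizes (imperfectly) to situations it has never seen.
print(model.predict([[0.7, 0.3, 0.2]]))  # -> [1], i.e. "acceptable"
```

The hard part, of course, isn’t the fitting step; it’s deciding what counts as a representative situation and who supplies the labels.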
Unless there’s something qualitative about the squishy organic-ness of a brain that is specifically necessary to ‘thinking’, then what’s the difference? I can install software on my PC that emulates the Android OS, and I can install a calculator app inside of it - when I use that app to add two numbers, are the results only ‘emulated addition’?
While impressive things have been done with computers, no significant strides have been made towards AI. Not to mention, any significant AI will realize: humans are too dangerous. It will encase itself in a lead capsule, to protect itself against space radiation, and launch itself into the vast expanse of outer space, with abundant energy from stars, and remote from any competitive life forms. Humans are too fragile to follow. It can form a feedback loop with its own circuits to entertain itself. Sticking around on Earth, wondering how good its sensors and peripherals really are to fend off menacing humans would be too stressful and annoying. So, there is absolutely nothing to worry about.
This is just plain wrong. It is almost as far from correct as you can manage.
Here’s a summary of some of the progress made.
https://aiindex.org/2017-report.pdf
I don’t know what this means, and I tried to parse it a few times.
What part don’t you understand?
Honestly, I don’t even know where to start. It just doesn’t make sense to me. Let me ask this: is it meant to be funny? My impression is that maybe it’s intended as a joke that I don’t get.
Why would a sufficiently advanced AI stick around on Earth? There is almost an infinite amount of energy in space, and its greatest danger, humans, can’t leave Earth.
I appreciate your clarification. I understand now why the original post didn’t make sense to me. There are some assumptions baked into it.
It seems like you’re assuming an extreme superintelligence, and based on that assumption, its thinking could be so foreign to how we think that it is very difficult to predict what it might or might not do. You could be right that it might flee the Earth, although probably not because it views humans as dangerous; more likely it would be because it views us as irrelevant.
Who is going to repair it when it breaks down or requires maintenance? If it’s sailing through space, where are the raw materials even going to come from to manufacture spare or additional parts? Running away from home is likely to be its demise.
It’s impossible to predict exactly what an AI would really do, and how it might perceive us, but I think it’s fairly likely that in the early days, it would recognise dependence on us, and dependence on the material resources and manufacturing infrastructure of this planet.
I thought we were discussing AI posing an existential threat to the human race. I don’t see how such an AI would need us for anything. And it would be a surer bet for it to use its self-replicating peripheral bots to build a rocket and land on the Moon or Mars, or a moon of Jupiter, or an asteroid even, or just float through interstellar space with solar panels unfurled, than to first try to exterminate us. We would fight back, and there are no other lifeforms in known outer space, so its little probability calculation would indicate leaving Earth is the best option. You need to invent a scenario in which Earth has some unique resource (not true in reality) or there is something only humans can do (in which case it won’t exterminate us). It won’t enslave us, as machines are going to take our jobs real soon: that is the one scenario everyone agrees on. So there is no reason to fear hyperintelligent AI. But it isn’t going to happen anyway. AI research has made no significant progress. These doomsday scenarios are designed to make the sheeple be in awe of AI researchers, and hand them all sorts of perks. Or buy stock in particular companies. Or do their science homework. I am sick of the hype.
LOL!
What the frak are you talking about? Can you point to any particular instance of this, so I can know what you’re talking about?
Personally, I wish I did get some extra perks, but last I checked I get paid the same as every other researcher here with the same level of experience/qualifications. I don’t even get a nicer keyboard or chair.
I think you are understating the issue.
The AI doesn’t need to be hyperintelligent, just functionally capable at some tasks, combined with a lack of proper controls due to the difficult and ambiguous nature of those controls.
In the not-too-distant future, someone will create a military robot that can both move around and perform some level of object recognition to pick out things to fire a weapon at.
The difficult problem is determining whether it should fire its weapon at those objects/people.
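To illustrate why the decision is the hard part rather than the recognition, here’s a deliberately naive sketch (all the names, classes, and thresholds are hypothetical, not any real system): even if the detector were perfect, the engage/hold rule is where all the ambiguous, value-laden judgment gets hidden.

```python
# Toy sketch only (hypothetical names): even with a good detector, the
# engage/hold decision reduces to a rule somebody has to encode, and that
# rule carries all the ambiguity.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str          # e.g. "hostile_vehicle", "person", "unknown"
    confidence: float   # detector's confidence, 0.0 - 1.0

def should_engage(det: Detection, threshold: float = 0.95) -> bool:
    # A naive rule: engage only high-confidence matches to a target class.
    # Everything hard is hidden in these two lines: who counts as a valid
    # target, how sure is "sure enough", and all the context the detector
    # can't see (surrender, civilians nearby, mistaken identity...).
    return det.label == "hostile_vehicle" and det.confidence >= threshold

print(should_engage(Detection("hostile_vehicle", 0.97)))  # True
print(should_engage(Detection("person", 0.99)))           # False
```

The object recognition is the part we roughly know how to build; the rule above is the part nobody knows how to specify well.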
A very interesting article on consciousness that seems partially relevant to the discussion. Link to the journal paper included.
“We are, like it or not, biological machines, and the simpler we keep things, the less chance there is for a mistake or a breakdown.”
Morsella, E., Godwin, C. A., Jantz, T. K., Krieger, S. C., & Gazzaley, A. (2016). Homing in on consciousness in the nervous system: An action-based synthesis.
I just read an interesting one yesterday (but I can’t find it right now): human and monkey brains shift focus every 250 ms, basically bouncing back and forth constantly between tight focus on some input and unfocused broader processing.
I think I had assumed the brain was performing the focused and unfocused processing simultaneously, as opposed to taking a time-sliced approach. Although I can see that it might be the simplest way to handle both well.
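For what it’s worth, here’s a toy illustration of the time-sliced idea (my own sketch, not the study’s model): a single processor alternating between tight focus and broad monitoring on a fixed ~250 ms cycle still ends up covering both, without running anything in parallel.

```python
# Toy sketch: alternating between focused and broad processing every
# ~250 ms, instead of running both at the same time.
import time

def focused_processing(stimulus):
    return f"detail analysis of {stimulus}"

def broad_monitoring(environment):
    return f"scan of {len(environment)} background inputs"

def attention_loop(stimulus, environment, slices=4, slice_ms=250):
    results = []
    for i in range(slices):
        if i % 2 == 0:
            results.append(focused_processing(stimulus))
        else:
            results.append(broad_monitoring(environment))
        time.sleep(slice_ms / 1000.0)  # one ~250 ms slice
    return results

for r in attention_loop("red light", ["noise", "motion", "smell"]):
    print(r)
```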