Old 01-27-2018, 12:37 PM
SamuelA
Originally Posted by Tripler View Post
I care about any "how." There are not multiple converging paths, and the future is infinitely disparate from what we think it is. "They" [the paths] do not necessarily lead there. I'm looking to find your evidence on why you think they do.

How do you make intelligence systems use this method? I understand your ends, but with what ways and means do you intend to affect this change?

The information given to the machine is only as good as the person giving that information. GIGO. Your ideal machines are prone to hacking.

I'm sorry but if you don't know how we get from "A" to "B", then your argument is moot; you're just postulating a utopian society without any evidence to back it up.

Open ears.
OK, I'm a little confused now. What citations do you need? Do I need to link the lectures on Udacity or one of the other AI training sites, or the papers from Google, or what? This stuff is all very new and cutting-edge. Everything I said either works now or will work Real Soon Now, including planning agents that can model nanotechnology.

What are you talking about with "hacking"? Or "giving information to the machine"?

That's not what reinforcement learning is. Humans build the plumbing, but the reason the machine would "know" a bag of chips crumples is that it has subsystems for that, and those subsystems figured it out from observation.

A simple one would just have a neural network that takes the output from the classifiers. The classifier is the module that looks at the camera feed and labels the different parts of the image, like "chip bag".
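To make the classifier idea concrete, here's a minimal sketch of a "label the image region" module. Everything in it is invented for illustration: the label set, the feature size, and the random weights standing in for a trained network. The point is only the shape of the interface: features in, label plus probabilities out.

```python
import numpy as np

# Hypothetical toy classifier: maps an image-region feature vector to a label.
# The labels and weights are made up; a real system would use a trained
# convolutional network over the camera feed.
LABELS = ["chip bag", "table", "robot arm"]

rng = np.random.default_rng(0)
W = rng.normal(size=(len(LABELS), 8))  # stand-in for trained weights
b = np.zeros(len(LABELS))

def classify(features):
    """Score each label and return the most likely one (softmax over logits)."""
    logits = W @ features + b
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    return LABELS[int(np.argmax(probs))], probs

label, probs = classify(rng.normal(size=8))
```

Downstream subsystems would consume `label` (and its confidence) rather than raw pixels.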

Other subsystems would reconstruct the geometry from a mixture of stereo cameras and lidar.
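The stereo half of that reconstruction boils down to triangulation: a feature that appears shifted between the two camera images has depth proportional to focal length times baseline over disparity. A toy version, with camera parameters I've made up for illustration:

```python
# Toy stereo triangulation: depth = focal_length * baseline / disparity.
# Both camera parameters below are assumed values, not from any real rig.
FOCAL_PX = 700.0   # focal length in pixels (assumed)
BASELINE_M = 0.12  # distance between the two cameras in metres (assumed)

def depth_from_disparity(disparity_px):
    """Depth in metres of a feature seen disparity_px apart in the two images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_PX * BASELINE_M / disparity_px

# A feature 42 px apart in the left/right images:
d = depth_from_disparity(42.0)  # -> 2.0 metres
```

Lidar gives depth directly, so the real work is fusing the two estimates per point into one geometric model.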

And those subsystems feed into a simulator: a neural network that predicts the next state of the system. Its weights, learned from the data, would predict that the future state of the chip bag, after pressure is applied, is pressed inward, with the geometry distortion given by those learned numbers.

It's a very complex topic, to be honest, and I can't really do it justice. I just "know" we can get these pieces to work extremely well and build agents that do more complex tasks. And there are hundreds of billions of dollars being poured into it.

I also "know" that the problem I have described (various common objects inside a robotic test cell, with several robotic arms and a defined goal that requires the machine to "invent" a Rube Goldberg machine to accomplish the task) is the type of problem that is very solvable with the current state of the art.
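At its core, "inventing" a multi-step contraption is search through a model of what each action does. Here's a minimal sketch: the objects, locations, and actions are all made up, and real planners search far richer state spaces, but breadth-first search over action preconditions and effects is the skeleton of the idea.

```python
from collections import deque

# Hypothetical action model: each action has preconditions and effects,
# both expressed as {object: location} facts. All names are invented.
ACTIONS = {
    "pick(ball)":        ({"ball": "table"},   {"ball": "gripper"}),
    "place(ball, ramp)": ({"ball": "gripper"}, {"ball": "ramp"}),
    "release(ball)":     ({"ball": "ramp"},    {"ball": "bin"}),
}

def plan(start, goal):
    """Breadth-first search for an action sequence reaching the goal facts."""
    frontier = deque([(start, [])])
    seen = {frozenset(start.items())}
    while frontier:
        state, steps = frontier.popleft()
        if all(state.get(k) == v for k, v in goal.items()):
            return steps
        for name, (pre, post) in ACTIONS.items():
            if all(state.get(k) == v for k, v in pre.items()):
                new = {**state, **post}
                key = frozenset(new.items())
                if key not in seen:
                    seen.add(key)
                    frontier.append((new, steps + [name]))
    return None

steps = plan({"ball": "table"}, {"ball": "bin"})
# -> ['pick(ball)', 'place(ball, ramp)', 'release(ball)']
```

Swap the hand-written transitions for the learned simulator's predictions and you get planning over observed physics instead of hard-coded rules.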