What arguments are not convincing?
The problem is, people here believe I'm trying to convince them of something. I am not. I am just asking questions, so the stereotypes fall by themselves, by their own internal contradictions.
I also have an MSc, fellow :rolleyes:… It allows me to teach classes at a university. :dubious: I don’t consider myself a scientist, though, but a practical man, closer to a plumber, a carpenter, or a mechanic than to Galileo or Newton.
Every argument you have ever made on this board is unconvincing
But he’s just asking questions.
This is interesting. So mathematicians are like computers. Both are programmed to think logically and to use language in specific ways: C++ or Java, for instance.
:rolleyes: (By the way, I am a computer plumber)
Computers process, they don’t think. They can’t approach any problem from a new direction, or do anything that hasn’t been done before.
Mathematicians can (and do) do all of those things. Eventually, a programmer will code up a mathematician’s novel solution to a problem or new algorithm, but the original work always has to be done by people.
Indeed. We have failed to build artificial mathematicians… so far.
So, the internet is really a series of tubes?
But we have built successful question-askers; almost 50 years ago, “ELIZA” (ELIZA - Wikipedia) was able to hold a conversation, if by “conversation” one means asking questions loosely based on previous replies. Apparently, creating a computer program that does creative science or math is a lot harder than creating one that asks questions.
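The ELIZA trick is simple enough to fit in a post. Here is a minimal sketch in the same spirit (my toy illustration, not Weizenbaum’s actual script): match a keyword pattern, reflect the pronouns, and answer with a canned question.

```python
import re

# Pronoun swaps so the echoed fragment reads naturally ("I am X" -> "you are X").
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# Ordered rules: first pattern that matches wins; the last one is a catch-all.
RULES = [
    (re.compile(r"i feel (.*)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Can you tell me more about {0}?"),
]

def reflect(fragment):
    """Swap first/second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    """Return an ELIZA-style question loosely based on the input."""
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(reflect(match.group(1).rstrip(".!")))

print(respond("I am worried about AI"))  # -> How long have you been worried about AI?
```

A handful of rules like these is all it takes to keep a message-board “conversation” going, which is exactly why it is so much easier than doing creative math.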
But ELIZA is a toy. Something like those gadgets Alexandrian scientists made to impress the superstitious masses of Egypt. So far, A.I. has only been another name for modern puppeteers.
HAL isn’t here yet, no matter that A.I. prophets have predicted its coming since the ’50s.
No. Not frozen.
Any Ys? Actually, I kind of like the modified spelling. I might start using it regularly. Heck, why not get rid of that pesky p while we’re at it?
I agree - as I said, all it could do was ask questions vaguely related to the topic at hand, and drop in occasional non sequiturs. Good enough for a message board, but not for science or math.
Yes, but game playing is just a form of problem solving at which computers perform well. We should remember that computers are specially programmed calculators. So computers calculate very well, and faster than humans.
Computers fail at common-sense problems: the kinds of actions humans do fast, without even thinking much about them, can overload a computer.
No, the antisemitic rant would accuse them of thinking they’re “chosen,” “superior,” and “alephs.”
Uh, Watson uses a lot of “common sense” to get the correct answers. It was mostly hard-coded, but it was there. Until recently we were missing a theory of the brain, with which the AI people could at least have figured out what they were doing wrong all these years.
I think we are now getting close to solving the issue of “common sense,” and the best approach I have seen uses constant feedback and pattern recognition. I agree with Jeff Hawkins that a lot of what one could call common sense is patterns stored in memory that are reused, often without engaging all the possible neurons in a wasteful and time-consuming way.
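To make the “stored patterns that are reused” idea concrete, here is a toy sketch (my own illustration, not Hawkins’s actual memory-prediction framework): instead of computing an answer from scratch, the system recalls the most similar remembered situation and reuses its response.

```python
def hamming(a, b):
    """Count positions where two equal-length feature tuples differ."""
    return sum(x != y for x, y in zip(a, b))

class PatternMemory:
    """Reuse stored experience by nearest-pattern lookup instead of reasoning."""

    def __init__(self):
        self.memory = []  # list of (situation, response) pairs

    def store(self, situation, response):
        self.memory.append((situation, response))

    def recall(self, situation):
        """Return the response of the most similar remembered situation."""
        return min(self.memory, key=lambda m: hamming(m[0], situation))[1]

mem = PatternMemory()
mem.store(("wet", "cold", "outdoors"), "take an umbrella")
mem.store(("dry", "hot", "outdoors"), "wear a hat")

# A never-seen situation is handled by analogy to stored experience.
print(mem.recall(("wet", "cold", "indoors")))  # -> take an umbrella
```

The point of the sketch is the lookup: no rule about umbrellas was ever programmed for indoor situations; the answer is cheaply reused from the closest stored pattern, which is roughly the kind of shortcut being claimed for common sense.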
A related example is the surprising skill demonstrated even by robotic devices that rely mostly on feedback and only some programming.
Compare that to the towel-folding robot arms that got more publicity recently; it seems the approach used on that slowpoke was to painfully program all the possibilities and let the computer calculate the proper solution. It works, but the impractical slowness points to the brute-force style that has hampered not only robotics but AI.
I believe that soon we will find ways to integrate continuous feedback mechanisms into systems that can properly deal with disparate information and situations many assume only humans could handle; we will create intelligent machines.
Perhaps you are right this time, with this approach to common sense, but allow me to doubt.
In fact, the promises of A.I. and the failures that followed are famous, and becoming legendary.
Perceptrons, semantic networks, Prolog machines, neural networks, and many other approaches claimed they would build “intelligent” machines… And there were always people assuring us HAL 9000 was just around the corner. Yet time has passed and we continue with machines as smart as in 1950! Yes, some toys are sophisticated, but they hardly surpass the state of expensive puppets.
Let’s hope you are right this time, but let me be skeptical.
Missed the edit, here is the other robot that I was referring to:
That towel-folding exercise is, IMHO, what happens when tactile data and other feedback are ignored: the makers of that robot had to rely on visual cues alone to do the job.
While not 100% related to AI, those robots are a metaphor for what has happened to the AI field: the slow robot has to infer a lot just from visual cues. In the AI world, most of the approaches to simulating human intelligence were, and are, missing several layers of cues and feedback that are second nature to us humans. (This being a metaphor, one has to point out that the slow robot was missing tactile information because the researchers were concentrating just on visual geometric cues.)
It is by combining systems that record that feedback as patterns, and adding a memory-prediction framework to recognize those patterns, that we will, IMHO, get strong AI in the near future.
Interesting. I’ll take a look.