Hamsters self-aware?
We’ll never hear the end of it. The original Hamster Dance ad nauseam is on par with Badger, Badger.
Just thought I’d add David Brin’s Earth to the list of developing-overmind stories.
This is an interesting idea.
Firstly, computer programs which play chess have become world class. However, they don’t think in the same way as humans do.
There are two main types of programs.
One lot generate millions of positions, covering all possible moves + continuations from a starting position. They then do a simple evaluation of each position, and select a move that guarantees the best result given best play by the opponent. Obviously there is calculation, but each move (no matter how apparently absurd) is solemnly taken as seriously as any other.
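The first approach — generate every continuation, evaluate the leaves, and pick the move that guarantees the best result against best play — can be sketched on a toy game. This is purely illustrative, not how any real chess engine is written: a "take 1 or 2 stones, whoever takes the last stone wins" game stands in for chess, and a simple win/loss result stands in for a full evaluation function.

```python
def player_to_move_wins(stones):
    """Exhaustive search: does the player to move have a forced win?"""
    if stones == 0:
        # The previous player took the last stone, so the
        # player to move has already lost.
        return False
    # Try every legal move, no matter how apparently absurd.
    # A position is winning if some move leaves the opponent
    # in a losing position (best play by both sides assumed).
    return any(not player_to_move_wins(stones - take)
               for take in (1, 2) if take <= stones)

# For this game the losing positions turn out to be
# exactly the multiples of 3.
```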
The other lot of programs take a small number of pieces (currently 5, but I think they’re working on 6) and generate all legal positions with this material. They then sort the positions into a logical order (checkmate; one move from checkmate; two moves from checkmate; etc) and compile a database. The program can then recognise any position with this material and simply follow each position to the best possible result. Absolutely no calculation at all. The computer has already worked out the answer and just looks it up. (Once computers have enough power to do this with 32 pieces, chess will be completely analysed.)
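The second approach can be sketched the same way: enumerate every legal position once, work backwards from the terminal positions, and store each result in a table, so that actual play is pure lookup with no calculation at all. Again, this is a toy illustration on the same stones game, not a real tablebase generator — real ones enumerate chess positions and record distance to mate rather than a bare win/loss flag.

```python
def build_table(max_stones):
    """Precompute the result of every position, working backwards
    from the terminal position (the retrograde / tablebase idea)."""
    table = {0: False}  # 0 stones left: the player to move has lost
    for n in range(1, max_stones + 1):
        # n is winning iff some legal move reaches a losing position,
        # which is already in the table because we build bottom-up.
        table[n] = any(not table[n - take] for take in (1, 2) if take <= n)
    return table

# At play time the program just recognises the position and looks it up:
table = build_table(12)
```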
You couldn’t describe either of the above as having consciousness (but they are very good at what they do).
Secondly, the reason that Skynet was dangerous was that it had access to nuclear weapon launches.
I’m afraid the best the SDMB lifeform will be able to do is Pit us carbon-based life forms!
As I said, you could make a rather similar statement about brain cells, if you were studying them individually and were not able to be aware of the collective result of their operation.
I’m no expert, but I am a computer programmer and have done enough research into A.I. to write a really poor sci-fi novel on this subject.
With the caveat that nothing is absolutely impossible, consciousness evolving spontaneously in computers, as they exist today, is so unlikely that it is best considered fantasy.
Let’s not forget the efforts of many, many brilliant minds who are attempting to create strong A.I. intentionally, and have been for many years, with very little success.
While the novel I wrote had a certain amount of accident involved in the process, the main goal of the human characters was to create believable A.I. using specialized hardware and software. The software end was specifically designed to modify itself and the hardware was specifically designed to use massively parallel processing to imitate the non-serial thought processes of biological creatures.
Even with those crutches, I felt at the end that the development of self awareness in the program without the knowledge of its creators was still a huge conceit in the story.
Here I am, brain the size of a planet, and they ask me to take the thread down to the Pit.
Call that job satisfaction?
’Cos I don’t.
This just blew my mind…
I have never heard a postulate like that.
Awesome.
How do you know it hasn’t already happened?
Moderator’s Note: Computer — emergency command, Moderator override: calculate pi to its final digit!
OK, thank God, that did it. If anyone wants to start a new thread that isn’t 3+ years old, link to this, etc., please feel free.