Matrix and Philosophy

Extremely improbable. Most people have this strange misconception that an Artificial Intelligence will somehow be vastly superior to us humans and have seemingly unlimited capabilities.

The truth is it won’t. There is nothing mysterious about an AI that will make it better than us.

In fact, in all likelihood, you'd be lucky to get it to the level of a dog.

And who says it will be nonspecialized AI, anyway? I think we’ll have really advanced expert systems well before we’ll have anything we’d consider human. Consider an example:

TMAC is Traffic Monitoring And Control. Its only purpose is to prevent traffic problems in Los Angeles and the Greater LA Area. It receives input from thousands of cameras mounted on light poles, weight sensors built into the roadway, and lasers placed just above ground level, all of them scattered throughout its service zone to provide it a comprehensive picture of the rather large amount of traffic it controls. TMAC controls the operation of every traffic light, railroad crossing, traffic report radio station, and traffic police hotline in its service zone, and is programmed to use them to maximise traffic flow and minimise slowdowns and stoppages. Emergency subroutines monitor for extremely unusual patterns that may indicate a crash, and can alert police when its suspicions are aroused. It is intelligent enough to create safe traffic flow patterns around construction and high-speed chases, and can even alter light timings to slow down traffic when the NOAA issues certain kinds of warnings.

TMAC is genius-level in its domain, traffic control, but it is not going to take over the world. Why? Because it is not generalized. It has no possibility of becoming generalized. Its entire universe is composed of traffic patterns, and its entire existence revolves around modifying traffic patterns to fit predetermined goals. In fact, referring to it as a genius or an idiot or a savant is deeply wrong: It is a well-written process running on a highly efficient machine, nothing more.
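To make that concrete, here is a toy sketch of what a "well-written process" like TMAC might boil down to. Every name, threshold, and sensor field below is invented purely for illustration; the only point is that the program's whole universe is traffic numbers:

[code]
# Hypothetical sketch of a TMAC-style narrow controller (Python).
# All names, thresholds, and sensor fields are invented for illustration.

from dataclasses import dataclass

@dataclass
class SensorReading:
    intersection_id: str
    vehicles_per_minute: float
    average_speed_kph: float

def adjust_green_time(reading: SensorReading, base_green_s: float = 30.0) -> float:
    """Lengthen the green phase when flow is heavy, shorten it when light."""
    if reading.vehicles_per_minute > 40:
        return base_green_s * 1.5
    if reading.vehicles_per_minute < 10:
        return base_green_s * 0.75
    return base_green_s

def looks_like_incident(reading: SensorReading) -> bool:
    """Flag a possible crash: plenty of cars but almost no movement."""
    return reading.vehicles_per_minute > 20 and reading.average_speed_kph < 5

# One tick of the control loop
reading = SensorReading("sunset_and_vine", vehicles_per_minute=55, average_speed_kph=3)
print(adjust_green_time(reading))    # 45.0 seconds of green
print(looks_like_incident(reading))  # True -> ping the traffic police hotline
[/code]

However clever the thresholds get, nothing in there can ever refer to anything but traffic.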

Or take Mark 42, the Universe-maker. It has the best physics model ever created by humanity, and it uses that software to create an absolutely perfect replication of the physical world. A human, plugged into this wondrous engine, would be unable to distinguish its construct from the real world. But Mark 42 is just as circumscribed as TMAC is: It only knows how to simulate the physical world. It has no conception of life as such, only certain actions being applied to certain objects that require certain rules (gravity, inertia, etc.) to be applied. Mark 42 can be asked to do anything, from modelling the Big Bang (thanks to the breakthrough work of Dr. Hawking, whose head is being kept alive by the mad geniuses in GQ :D) to the Big Crunch (or the Gnab Gib) to the ever-popular universe where the Los Angeles Yankees won the 2015 World Series against the Tokyo Chrysanthemums. What it can't be asked to do is something like enslave all of mankind in an endless replay of the late 20th Century. Again, that would be outside its programming.
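In the same spirit, here is a toy version of the Mark 42 point: the simulator knows objects and a couple of rules (gravity, inertia) and literally nothing else. The numbers and names below are purely illustrative:

[code]
# Toy "physics-only" simulator in the spirit of Mark 42 (Python).
# It applies gravity and inertia to a point mass and has no concept of
# anything else; goals like enslaving mankind simply aren't in its rules.

GRAVITY = -9.81  # m/s^2, the only law this toy universe has besides inertia

def step(position: float, velocity: float, dt: float = 0.1) -> tuple[float, float]:
    """Advance one object by one time step: gravity, then inertia."""
    velocity += GRAVITY * dt   # gravity changes the velocity
    position += velocity * dt  # inertia carries the object along
    return position, velocity

# Drop an object from 100 m and watch it fall for one simulated second
pos, vel = 100.0, 0.0
for _ in range(10):
    pos, vel = step(pos, vel)
print(round(pos, 2), round(vel, 2))  # about 94.6 m, falling at 9.81 m/s
[/code]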

The point I’m trying to make is thus: Even if we do make really advanced AI, we would probably not make it human and, therefore, not make it any competition to us.

Piffle and balderdash.
If we can make an AI as intelligent as a dog, after two years we will have one as intelligent as a man. After another fifty years we will have one as intelligent as a woman, and ten years after that they will be a hundred, a thousand times as intelligent as any human. Eventually we will convert all available solid matter in the solar system into intelligent matter.

[URL=http://www.orionsarm.com/whitepapers/Brains2.pdf]The Physics of Intelligent SuperObjects: Daily Life among the Jupiter Brains[/URL]

This will be possible because electronic brains can process thousands of times faster than humans and be made millions, billions, quintillions of times bigger. Now most philosophers cannot grasp that sort of thing, but Prof Bostrom can get his head round it.

quote (Derleth)
‘The point I’m trying to make is thus: Even if we do make really advanced AI, we would probably not make it human and, therefore, not make it any competition to us.’

Absolutely. The systems will not need to be anything like a human. However, the most advanced will be able to simulate a human with perfect accuracy if necessary…
and the most advanced computer one year will be superseded by another ten times as proficient…
they will be able, if we allow them, to develop their own goals and self-design until they are incomprehensible to any single human mind-
and quite possibly even if we do not allow them they will do it anyway.
Lamarckism if you like- acquired characteristics, exponentiating to the theoretical limits.
Eventually the solid matter of the universe will be designed by computers into their own likeness.
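Just to put a number on the "ten times as proficient each year" claim above (the factor and the time span are the ones used in this thread, not a prediction):

[code]
# Naive compounding of "ten times as proficient every year" (Python).
# The growth factor and horizon come from the claim above, not from data.

factor_per_year = 10
years = 10

capability = factor_per_year ** years
print(f"After {years} years: {capability:,} times the starting capability")
# After 10 years: 10,000,000,000 times the starting capability
[/code]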

This is all true, the Matrix wasn't exactly original. BUT, I see it as a good thing that I was able to see an action film with my younger brother and come out of the theatre talking about Brain-in-a-Vat skepticism. Since I saw it over a break while I was taking Contemporary Philosophy, and had just read a couple of articles about such skeptical hypotheses (including the one by Hilary Putnam quoted at length in the philosophy section at the Matrix website), I found it stimulating. What the film did was bring a little bit of deeper thought about philosophy to the audience for action films, not exactly a typically deep-thinking audience. While it wasn't anything new or original as a thought, it was new and original in popular film (let's face it, nobody watched Dark City), so it brought ideas to the masses who never took a college course on philosophy. So I give The Matrix credit for adding an interesting bit of philosophical musing into an action film, whereas the average lousy braindead action film (e.g. True Lies) just lets the viewer veg out and stare at exploding things and fire for 90 minutes.

(Look, sorry about the typos, by the way, I am used to an edit function - but this marvellous board is no doubt too busy to allow people to continually edit posts)

The Brain in a Vat argument, mentioned by RexDart, is interesting - to me it shows how professional philosophers can come to a completely reasonable but wrong conclusion.
Putnam and others say that a BIAV cannot refer truthfully to any external object, and cannot say ‘I am a BIAV’ without it being false.
C**p.
If you were a brain in a vat you could say what you liked. A digitised external object would be just the same as the real thing, simulated down to the quantum level if necessary.
The only thing you can't say truthfully is 'I am NOT a BIAV', because you will never know unless the simulating entity permits it.
So professional philosophy has got the answer 180 degrees wrong.

Indeed, and it goes further than this. I believe the Brain In A Vat is used by "professional" philosophers (absurdans reductio'd for $50!) to illustrate the petitio principii nature of philosophy, logic and mathematics (i.e. they are "based on themselves").

It could be the case that reason, philosophy, logic, mathematics, indeed any epistemology, are merely systems of nonsense being fed to us by our "programmers", when in reality (outside the jar) they are utter bunkum. (e.g. Those outside the jar laugh at the poor brain's inability to grasp that it exists and does not exist simultaneously.)

Hence one cannot know the “system” outside the jar and cannot be sure of any statement’s truth or falsity since even these qualities might be meaningless outside the jar.

And, since this scenario cannot be falsified, we must take it that the non-Matrix universe exists on faith.

Since it cannot be falsified, we can ignore the whole concept and carry on as if we were ‘real’.
Just hope the simulation doesn’t exceed its budget and get cancelled.

Agreed. (I was just observing that the point of the film was that the hero refused this approach.)

What is the theoretical basis of the so called “intelligent matter?”

Pah.

Intelligent matter is any matter which is capable of processing information…your computer contains intelligent matter, your spinal reflex arcs contain intelligent matter, your skull contains intelligent matter…
the level of intelligence depends on the sophistication of the program…
sometimes this class of matter is called computronium, especially when it is nonbiological in composition;
although living computronium is most often part of an organism’s nervous system it does not have to be.

cites: 1, 2, 3, 4, 5

Errm, as much as I admire and enjoy your enthusiasm, eburacum, I feel you should point out that “orionsarm” (the site you are repeatedly referencing) is a science fiction site, not a peer reviewed journal or such like. Referring to “intelligent matter” is therefore comparable to referring to the “pure magic” of Terry Pratchett’s Discworld.

Incidentally, [nitpick] I believe Total Recall was based on the work of Piers Anthony, not Philip K. Dick. [/nitpick]

Yes, and the Matrix is a science fiction movie… nevertheless, the other references are from equally mad humans who are interested in the development of intelligent matter for their own nefarious ends
and I suggest Mr Anthony may have been influenced by Mr Dick (the writer, not the Dickens character)

This is a common misunderstanding – the novelization of Total Recall was by Piers Anthony. That is, his work was based on the script. The script was based on Philip K. Dick's "We Can Remember It For You Wholesale".

Humble pie duly eaten. :slight_smile:

This is certainly true, but a properly constructed Matrix scenario reality could have different rules modelled by the program. It could be arranged that time could run backwards for short periods, or apparent entropy could decrease, or the number of physical dimensions could be increased
or humanity could be given the power of flight, or anything you could imagine (in fact the Matrix film is somewhat conservative in that respect).
And what is the deal with the Human battery nonsense? It would be much more efficient to use the biomass as fuel directly.
Or even to use geothermal, tidal, solar power- whatever.

I like this idea because nesting realities allows for workarounds to things like superluminal travel. In the nested reality level(s), fundamental laws of physics can be changed just by applying a 'cheat code' or a little reprogramming. What if we aren't part of a 'basement level' reality, which would be the only one where the laws of physics are (presumably) immutable?
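A minimal sketch of that "cheat code" idea, assuming the nested reality's laws are just parameters held by the level above (all names and values are invented):

[code]
# Minimal sketch of a nested reality whose "laws" are just parameters (Python).
# The inhabitants treat the constants as immutable; the level above can
# overwrite them at will - the 'cheat code' in the post above.

class NestedReality:
    def __init__(self):
        # Experienced as fundamental, unchangeable laws from the inside.
        self.laws = {"speed_of_light_m_s": 299_792_458, "time_direction": +1}

    def apply_cheat_code(self, law: str, new_value) -> None:
        """Only callable from outside the simulation ('outside the jar')."""
        self.laws[law] = new_value

world = NestedReality()
world.apply_cheat_code("speed_of_light_m_s", float("inf"))  # superluminal travel allowed
world.apply_cheat_code("time_direction", -1)                # time runs backwards for a while
print(world.laws)
[/code]

Only in the basement level, if there is one, would there be no such handle to grab.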