Matrix and Philosophy

I’ve just read a very interesting book on The Matrix and philosophy. Quite a few contemporary philosophers seriously debate Matrix-like scenarios, and make the case that these kinds of debates deserve to be taken seriously, without once speculating on HOW any of this could take place. Machines that build other machines (machines that were not programmed to do so by humans) seem to be a very important factor if we are going to take all this seriously. I don’t mean to rain on any philosopher’s parade, but shouldn’t we put the horse back in front of the cart?
How possible is it for machines (even ones with AI) to build the incredibly complex infrastructure of these Matrix-like scenarios without plans, knowledge of how humans work, or opposable thumbs?

I don’t understand. If machines actually had true AI, I think it would be very simple to set up a scenario like The Matrix.

Think of an ant colony. Most ants are bred as workers, some as fighters, and so on. Each ant “type” is born knowing exactly what to do and how to do it, and their entire system orbits the queen ant. Now, replace the queen ant with a sentient machine, and you have machines being built to work, machines being built to fight the humans, machines being built to build other machines. It wouldn’t be very prudent for the “Master AI” to program EVERY machine with individual AI, because conceivably it would break down like human society does. It’s the ultimate hive society and absolutely perfect in its efficiency, every joint, probe and tool designed and used for exactly what is needed. Compared to that, opposable thumbs are obsolete.

Maybe I’m the one who doesn’t understand, but it seems to me you are taking a lot for granted. Machines/computers are not ants. Billions of years of evolution have produced ants of particular specialties. Computers do not have this advantage.
Suppose you build a CPU - a CRAY (or a souped-up G4), for example - that is capable of ‘intelligence’ as we think of it in humans. One day this AI says to itself, “I’m tired of working for humans, I think I’ll take over the world.” What would it do then? Where are its drones? How would it build other machines to help accomplish this task? Even if it had ‘arms’, ‘hands’ and a locomotion system, where would it get the raw materials, run the milling machines, etc.? See what I’m getting at? Self-awareness is not the same as having a built-in blueprint for building drones.
Rather than say ‘Let’s jump to the part where the infrastructure is already set up’, let’s start at the beginning.

I got $10 that says this thread gets moved to GD :smiley:

First of all, no, philosophers have no obligation to worry about where the infrastructure comes from before they start debating philosophical issues.

Second, for a factory robot like we have today, yeah, it might be kind of hard to build something they weren’t programmed for. But as we get more and more advanced machines, they’ll have a broader range of abilities. For instance, think of how much mobility we’d have to program into an android designed to walk our dogs for us.

Achernar: “First of all, no, philosophers have no obligation to worry about where the infrastructure comes from before they start debating philosophical issues.”

But to have credibility, shouldn’t they? This goes to the heart of my question. Isn’t this how we invented God? Let’s not think about where God came from, or what came before Him/Her, but rather let’s just start with “In the ‘beginning’, God created…”


I see what you mean. Ok, assuming this AI is actually sentient and intelligent in all senses of the word when applied to humans, it wouldn’t take much to get it going. First of all, humans being as we are, we’d probably hook this AI into everything to begin with, BEFORE it went power-hungry. The electronic world is almost entirely connected by this thing called “the Internet.” We’d set the AI up for military purposes first, I’d guess. It would be the world’s most incredible hacker since it IS a computer. It would be able to manipulate code just as we manipulate speech. Need China’s nuke system taken offline? Send in the AI and it would have the entire network under its control in an hour. Now, if this AI decided it didn’t want to be slave to humans anymore, it wouldn’t take much.

First of all, it would have to put itself in a position to replicate. Taking over a few car factories would be the best bet. Order some non-standard parts from another company, reprogram the assembly line, and soon it would be pumping out cars or trucks programmed to do certain things: its “workers”. Once this is under control, it can start to spread, taking over assembly lines all over the world. At first, its designs obviously couldn’t deviate from what each factory is supposed to create, but in time it could upgrade factories to spec. Send increasingly complex “workers” back and forth to get necessary parts and supplies, and soon you’d be building from scratch. If humans caught on that their factories were doing stuff on their own, it wouldn’t be much trouble to keep them distracted. Turn off the power and water, take down communication centers. Hell, you could even get countries to fight each other without much trouble, with false reports of invasion, terrorism, etc.

In that sense, an AI wouldn’t be a living being so much as a virus. It would “infect” host “bodies” in an effort to churn out exactly what it needed.

You kidding? The Matrix gets far more credit for being “philosophical” (and creative, no less) than it is worth. Sociologists and philosophers have been discussing the potential of artificial intelligence and living in a dream for over a century (more so the latter). The Matrix just added guns and leather and made people feel smart.

There has long been literature, along with studies and theories, on technology and industry taking over society, as well as, more recently, on artificial intelligence. Hell, even 1984 wasn’t all that groundbreaking.

And I won’t even get into the HIDEOUS science in The Matrix. I’ve seen more realistic scenarios in pulp sci-fi magazines from the '20s. TERMINATOR is more realistic.

But anyway, we are still trying to figure out what AI is. If we took something and told it to reproduce itself, it very well might. I suppose humanity’s biggest fear is a human without the emotion, which is what we are looking at. Many universities are doing research on “evolution programs”, where they give a program a basic form and a set number of generations to “evolve” a way to accomplish a task - moving from one point to another, etc. So yes, it is possible for a computer program to evolve a technique for accomplishing a task, and even to do so socially. There is further research in taking the evolution program and applying it to modular blocks that can attach to each other and move independently. The program can then “evolve” into the most efficient form to accomplish the task (eating little bits of “food” scattered about, etc.).

For instance, they had one program designed to walk from one point to another. One of the evolutions was a critter that was basically a big leg. It would pull itself up and balance, then fall in the direction it wanted to go. Another moved like a sidewinder snake. Another evolved into some kind of weird semi-sphere thing with a leg in the middle that it used to push itself up and over. They eventually added a material printer (something that cuts a shape in a block, instead of printing on a piece of paper), and the program was able to make functional models of its designs.
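The core loop of an “evolution program” like those described above is simpler than it sounds. Here is a minimal sketch in Python (a toy illustration, not any particular university’s system): each “critter” is just a list of step sizes, and fitness rewards critters whose steps carry them from the origin to a target point. All the names and numbers here are made up for the example.

```python
import random

TARGET = 10.0       # the point the critter should reach
GENOME_LEN = 8      # each gene is one "step" the critter takes
POP_SIZE = 30
GENERATIONS = 100

def fitness(genome):
    # Position after taking all steps; closer to TARGET is better.
    return -abs(sum(genome) - TARGET)

def mutate(genome):
    # Copy the parent, nudging one randomly chosen gene.
    child = list(genome)
    i = random.randrange(GENOME_LEN)
    child[i] += random.gauss(0, 0.5)
    return child

random.seed(42)  # reproducible run
population = [[random.uniform(-1, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: keep the fitter half, refill with mutated survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:POP_SIZE // 2]
    population = survivors + [mutate(random.choice(survivors))
                              for _ in survivors]

best = max(population, key=fitness)
print("distance from target:", abs(sum(best) - TARGET))
```

After a hundred generations, the best critter’s steps sum to very nearly the target distance, even though nobody told it how to get there. The research systems described in the post evolve full body plans inside physics simulations rather than lists of numbers, but the loop is the same: evaluate, select, mutate, repeat.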

The problem with these programs is that they are VERY specific. I suppose you COULD make a program designed to reproduce itself endlessly, but when it ran out of resources it would just stop, or, if so programmed, find a new source of power. But again, this is where the specifics come into play. The program would need to know how power is made. That is the extent of what we know today.

Then you start getting into really murky stuff, namely true artificial intelligence, which has been discussed since computers were invented. It would have to have a remarkably curious and logical brain, though. Knowing how humans make things, it would probably realize it is low on power, beep a few times, and go into standby mode. :-p

You’re made of cells programmed with DNA. Are you a master of manipulating DNA? Can you build me a cat with wings with the wave of your hand?

warmgun wrote:

It depends on the philosopher, really. Whether or not they’ll be credible depends on what kind of philosophy they typically engage in, since some kinds are, by necessity, speculative. There are, if I’m not mistaken, plenty of credible philosophers who’ve speculated on the nature or existence of God, without first worrying about how God might have come into existence.

And as far as I can tell, anything like the Matrix would, logically, fall along the exact same lines. The how doesn’t really matter, since the possible existence of a Matrix, right now, is an untestable proposition. It’s possible that in what we would think of as the year 4197, human technology had built a set of truly AI machines which took over, created a Matrix, and “now,” in the year 5634, we’re all having this SDMB discussion. But who cares? Nobody can check to see if this is true (as far as I know).

The Matrix, like so many other movies, is basically a video version of a teenaged male’s wet dream. If philosophers are seriously debating the “issues” involved, then it seems to me that they’ve just become caught up in nostalgia for the comic books they’ve left behind. If they’re making money selling books about these debates, then I should get to work on my own treatises, shouldn’t I? :slight_smile:

Really, I kind of doubt that any philosopher who dabbles in Matrix debates is worried about credibility based on those dabblings alone. For some, it may just be a pleasant diversion…

Well, wait a minute. You said “serious” debates. People can seriously debate things which have no factual basis whatsoever. Let’s say we get some people together, and get them to debate what life would be like if gasoline didn’t exist, but other fossil fuels did. Given that assumption, it might be quite reasonable to argue, on one side, that transportation today would be using kerosene engines, and on the other side, that most engines would burn natural gas. Both sides could make serious arguments (again, given the assumption that no gasoline exists), without losing an iota of credibility for debating something which is completely hypothetical, as well as not having a clue as to how the hypothetical situation might come into being.

It becomes a “what if” game, but everybody involved should know it’s just a “what if” game. It’s those who don’t understand that fact who lose credibility as philosophers. Perhaps that’s why we call people who debate the nature of God ‘theologians’, instead.

In a word - no. At least, not in the case in question. Cartesian skepticism suggests that there is a possibility that everything we know is false, because, as Descartes suggested, an Evil Genius is deceiving us, or, in more recent versions, because our brains are being fed false information in some Matrix-like fashion. The challenge of Cartesian skepticism cannot be met by pointing out how, on the basis of what we know, the hypothetical scenario in which we are being deceived is improbable because what is at issue is precisely whether what we know has any basis in truth. The argument that it’s terribly improbable that we’re being deceived assumes that most of what we know is true. But that just establishes that if most of what we know is true, then we’re not being deceived about most things, which is a tautology.

Uh, no. And a sentient computer couldn’t build anything with a cycle of its CPU, either. I’m master at manipulating my thoughts, though, and if the very basis of the world I lived in was built around the same language, I’d be a god (sort of like in a dream, where you’re half-awake and can control what’s going on). Ditto a computer. The world the computer lives in is built upon the language the computer speaks, which makes it very powerful in its own right.

Gorsnak, I disagree. For my OP, Descartes goes back a little too far. I’m talking about contemporary (living) philosophers who state that this is a question to be taken seriously. In my book, that means leaving out the hypotheticals and taking into account all the variables, as in my question: “How possible is it for machines (even ones with AI) to build the incredibly complex infrastructure of these Matrix-like scenarios without plans, knowledge of how humans work, or opposable thumbs?”
And, ProjectOmega, you are on the right page, but people build computers by hand…no hacking that.

I’m not at all armed with knowledge enough to participate in the debate proper, sadly enough. Having read the book and its two predecessors (it’s a series), I just felt like popping in to provide a slight bit of context, in response to the people noting The Matrix’s philosophical insubstantiality or the motives of the authors.

I agree that The Matrix is by no means a philosophical movie; it’s just a stylish little action movie for which you can shut your brain off. Despite enjoying the previous two installments in the series, I winced when the back cover of the book called it “the most philosophical movie of all time.” But if you can get past that ridiculous assertion, the book itself is more philosophy than Matrix, delving into nihilism, determinism, metaphysics &c.

With that said, yes, this is pretty much a diversion for the authors involved. To give you an idea of how much so, the previous two installments of the Pop Culture and Philosophy series were Seinfeld and The Simpsons, of all things. Irwin, the editor, mentions in one of the introductions that it’s really just a way to get into people’s heads by approaching philosophy from the angle of something they’re familiar with.

Hooray! I feel useless sharing tangents like this.

I just found something that should end this debate as it applies to The Matrix:

Click on where it says “Animatrix” on the left hand side and watch the first installment. I didn’t even know this existed until twenty minutes ago, but apparently it’s an official mini-series of anime shorts that delve into the backstory of The Matrix. Easily feature-length quality and the first few are being directed by the Wachowski brothers (directors of the Matrix).

ProjectOmega, the computer is totally bound by the rules of the world it resides in. Unless you are postulating a self-aware sentient program that has rather deep control over its simulated environment (and what if the OS, UNIX-like, prevented individual processes from manipulating anything below the application level?), a computer would be more bound to its realm than a human could ever be. Even an intelligent man is bound by the laws of physics, after all, even if he can discover them from first principles.

warmgun wrote:

You can’t do that, really. We’re all posting here granting that the “hypothetical” situation that other people exist is true. Every argument, no matter how serious or whacko, is based upon assumptions that may or may not be testable. If we can agree that our senses are fallible, then we can only guess that we’re actually reading other people’s words, and not imagining them. If we can agree that something like a Matrix is the least bit possible, then we can only guess that we’re not in one right now.

In each case, the likelihood of our guesses isn’t really germane to the issue of what things might be like if our guess is correct (or not). The point is, from a completely philosophical point-of-view, if people are willing to play the “what if” game by the assumptions you lay out, then anything which logically follows can be taken “seriously.” If you’re not willing to grant the possibility of a Matrix without evidence that one could be created, you’ll likely find the very serious arguments put forth which use its existence as an assumption as ridiculous.

New Boy of the Orz wrote:

Ah! In that case it probably is all serious and based on nothing but far-out hypotheticals with no need for an answer to “how could this possibly happen in the first place?”

For me, your post was very useful.

Maybe I phrased it wrong. It’s easy enough for anyone with even rudimentary exposure to philosophy to drag this into a quagmire of opinion-disguised-as-answer. What I don’t want is for this to drift into a debate on whether the philosophy of The Matrix is valid or new (it’s not). I realize that this movie is breaking no new philosophical ground. Nor is the book. Believe me, there are plenty of debatable notions in the movie that even the experts didn’t mention that I would love to get into. But that’s another thread.
This thread is an attempt to ask that, if you are going to address the idea as something serious and lofty, you have the cojones to back up a little and address the mundane and practical as well.
Can computing machines with AI build ‘drone’ machines to do their bidding?

The concepts in The Matrix are by no means only contemporary. If you exclude the idea of “machines” and focus more on the concept of living in a false reality or illusion, you start to see glaring and obvious similarities between The Matrix and various Eastern philosophies. Many Buddhist texts discuss this very idea. Various Hindu scriptures discuss the concept, e.g., the Yoga Sutras. The Gnostic Christians and the Sufis also delve into this.