How far can programming go?

See, the way I look at it, the computer is simply a machine. When you type in code, you are effectively entering instructions that tell this machine what to do.

That’s it. Period.

Now obviously this viewpoint is a little short-sighted. This type of technology enables us to do many different things. The various sorts of applications available on the market are a testament to what it can be used for.

However, how far down the rabbit-hole can the programming revolution go?

In the same way that I missed the point (in the beginning) about just what these sorts of technologies can do, is there some greater utility that has yet to be discovered? I mean, how far can we go with this thing?

All I see are machines carrying out what are (mostly) a basic set of instructions and executing them on command. But maybe others think differently. What limits do you see on this type of stuff? What is it possible to do? What can you conceive of in the near or very distant future with respect to programming and programmable devices?

Tell me your thoughts, what you’ve read and what you’ve heard, and where you think we’re all going with this.

Barring some kind of AI, all that the code will ever do is what it’s programmed to do.

I once sketched out a program that, given infinite time and memory, would output every theorem from every set of axioms in the first-order predicate calculus. Every mathematician who’s ever lived has, in a sense, been trying to duplicate the interesting parts of this program’s output.
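
For a concrete flavor of the idea, here is a minimal sketch in Python. It substitutes Hofstadter’s toy MIU string-rewriting system for full first-order logic, purely for illustration; a real version would enumerate candidate derivations and run a mechanical proof checker over each one. The dovetailing enumeration is the same shape either way:

from collections import deque

def miu_successors(s: str):
    """Apply each inference rule of Hofstadter's toy MIU system
    (standing in here for a real proof calculus)."""
    if s.endswith("I"):
        yield s + "U"                      # Rule 1: xI -> xIU
    if s.startswith("M"):
        yield s + s[1:]                    # Rule 2: Mx -> Mxx
    for i in range(len(s) - 2):
        if s[i:i + 3] == "III":
            yield s[:i] + "U" + s[i + 3:]  # Rule 3: III -> U
    for i in range(len(s) - 1):
        if s[i:i + 2] == "UU":
            yield s[:i] + s[i + 2:]        # Rule 4: drop UU

def enumerate_theorems(axiom: str = "MI", limit: int = 20) -> None:
    """Breadth-first search from the axiom: given unbounded time and
    memory, every derivable theorem would eventually be printed."""
    seen, queue, printed = {axiom}, deque([axiom]), 0
    while queue and printed < limit:
        theorem = queue.popleft()
        print(theorem)
        printed += 1
        for successor in miu_successors(theorem):
            if successor not in seen:
                seen.add(successor)
                queue.append(successor)

enumerate_theorems()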

I don’t think, at least with any hardware model and associated programming paradigm known to us, we’ll ever go “down the rabbit hole” and greatly expand the types of problems that can be solved.

I think faster hardware will allow us to find new applications by making things feasible by sheer brute force, but I don’t ever see an “Artificial Intelligence” becoming a reality.

It’s interesting to note that the things that spring to mind when people hear “artificial intelligence,” like being able to process human language well enough to fool a human, are shunned by mainstream AI researchers as being the domain of crackpots.

Yeah, the whole issue is whether or not true intelligence is possible on a machine. Most of our current successes in AI have to do with increasing a computer’s ability to understand stuff. But it’s mostly just giving it a really good taxonomy and set of relationships so it can combine existing concepts into new objects it hasn’t been exposed to, but still build off of taught concepts.

Whether a computer can actually be made to think, which in my mind means to understand a goal and create new thoughts to achieve that goal, is still widely debated, and largely dismissed. The interesting thing in my mind is that I wonder if we only need to breach the threshold of ‘true intelligence’. At that point it could understand the concept of ‘improve yourself’, which it could experiment with at teraflop speeds and make the rest of the way to high intelligence quickly.

IMO, the notion that computers can’t really be intelligent implies that there’s Something Special about human intelligence. That view seems to be getting harder and harder to hold as we learn more about how our brains are put together and how other animals think.

Can you point to any specific discoveries or insights into how our minds work that would make even a very simple artificial intelligence (capable of learning, planning, basic problem solving–completely ignoring emotion) easier to implement? Understanding how a biological system works doesn’t mean it will be possible to emulate it on any hardware platform currently available.

I can’t say anything specific about hardware models, but I read wolfman’s post as saying that experts think that AI is impossible in general without reference to a specific hardware/software paradigm. That’s what I was responding to.

I didn’t see the OP asking about artificial intelligence, per se, but rather about how far can computer programming go.

I look at it this way: Is an insect intelligent? Or does it simply have a very complex set of instructions that govern what it does, when it does it, and how it responds to various internal and external stimuli? I mean, a bee has this basic instruction that says, “Fly around until the scent detectors are stimulated by molecules of a certain type, change direction of flight such that the scent detectors get more stimulation, when an object is eventually found (the flower), crawl around until certain substances are found, fly back along the original route to the hive, perform certain actions that will relay the pertinent information about the found source of the desired substance to the other bees, deposit the substance at some prescribed location. Repeat.”

That’s highly simplified, but the basic point is that you don’t really need “intelligence” to be an insect. A bee doesn’t think, “It’s time to go look for nectar”, or “I don’t really feel like flying around right now, I think I’ll just kick back and watch some TV.”
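
As a sketch of how little machinery that requires, the whole “bee program” could look something like the loop below. Every sensor name and threshold here is invented for illustration:

import random

def bee_step(sensors: dict) -> str:
    """One tick of a purely reactive bee: each branch is a hard-wired
    stimulus-response rule. There is no deliberation anywhere."""
    if sensors["carrying_nectar"]:
        if sensors["at_hive"]:
            return "waggle_dance_then_deposit"
        return "follow_route_home"
    if sensors["scent_strength"] > 0.8:
        return "land_and_crawl"         # strong scent: the flower is right here
    if sensors["scent_strength"] > 0.0:
        return "fly_up_scent_gradient"  # weak scent: steer toward more of it
    return random.choice(["fly_left", "fly_right", "fly_straight"])

# The entire "mind" is this loop: read stimuli, emit the mapped response.
print(bee_step({"carrying_nectar": False, "at_hive": False, "scent_strength": 0.5}))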

I believe that it is well within the abilities of current software technology to write a program for an artificial bee that would be indistinguishable – at least as far as behavior goes – from a real bee. The reasons that an artificial bee couldn’t be made today are hardware limitations, not software ones. We don’t have anything that can store a large enough program in something the size of a bee, and we don’t have the motors and servos and power source that could make something that small.

Moving up from insects, I would ask: Is a cat really intelligent? Or does it just have an extremely complex set of instructions that take as input lots and lots of parameters (the state of the cat’s stomach, what’s coming into its eyes, its ears, its nose, etc.) and then govern the cat’s behavior? Cats certainly learn, but we’ve had “learning” computer programs for years, programs that change their outputs based on changing inputs.

A cat learns that if it’s outside and wants to come into the house, meowing will get the big creature on two legs to open the door and let it in. But if you’re not home, how long will the cat sit there meowing? Does it have the intelligence to conclude that, after two minutes of no response, the two-legged creature is not coming? Or does the “meow to get in” program just keep running and running until something else (hunger, sight of a bird, a dog in the yard) overrides it?
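
That “keeps running until something overrides it” behavior is easy to express as a program. A hypothetical sketch, with all the drives and numbers invented:

def cat_step(state: dict) -> str:
    """One tick of a hypothetical cat program: score the competing
    drives and act on whichever is strongest. Note that 'meow at the
    door' has no timeout; it simply keeps winning until hunger, a bird,
    or a dog outscores it."""
    drives = {
        "meow_at_door": 0.6 if state["wants_in"] else 0.0,
        "chase_bird":   0.9 if state["sees_bird"] else 0.0,
        "flee_dog":     1.0 if state["sees_dog"] else 0.0,
        "seek_food":    state["hunger"],   # slowly rises over time, 0.0 to 1.0
    }
    return max(drives, key=drives.get)

# With nothing else going on, the cat meows forever; no reasoning about
# whether anyone is actually home ever enters into it.
print(cat_step({"wants_in": True, "sees_bird": False, "sees_dog": False, "hunger": 0.2}))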

I believe that, theoretically at least, we could write an artificial cat program. Obviously, it would be gigantic, more complex than any program ever written, and would take who knows how many hundreds of software engineers and decades of work to write. But, as relates to the OP, I don’t think it is necessarily beyond the theoretical limits of programming.

So I’d say that we’ve got a long, long way to go before computer programming hits any kind of limit. I think what will always limit us will be the hardware, not the software.

A master program suite or system could be built that, via trend analysis, could start determining what is more relevant and more in demand. Something rudimentary like this is happening at Google. In theory, the computer network will learn how to write code for quicker and more relevant searches without programmers. If it is successful, this would be the basis of a more powerful master system that would grow in sophistication slowly over time as hardware improves and statistical databases on users accumulate.
Eventually the computer system would code its own improvements.
If by AI you mean self-aware, that is too philosophical for me, but if you mean self-correcting and self-coding, I think it will be possible someday.

I suppose it depends on what you think intelligence is, in the first place. Personally, I think that human intelligence is just the byproduct of a very complicated machine with incredibly specialized pattern matching capabilities. I would also claim that any system capable of processing information and producing an output indistinguishable from that of a human is by definition intelligent. From that point of view, I have no doubt that we will eventually be able to create artificial intelligence.

To continue Roadfood’s example, if we can write the software that powers a cat, I believe we could also write the software that powers a human (eventually). Or, much more likely, at some point we will create a computer system that is capable of designing a more powerful computer on its own, and the capabilities of machines will take off from there without any further human input.

That still doesn’t explain what intelligence is.

If you understand the underlying principles and mechanisms of a system, you can have a computer simulate and emulate its operation. And I don’t mean understand everything about it - just the basic building blocks. You can start with the basic laws of physics and model the behavior of a complex system, e.g. a weather pattern or the operation of a nuclear bomb. I see no reason why a human brain would be any different.
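
As a tiny illustration of building behavior up from basic mechanism, here is a standard leaky integrate-and-fire neuron model, Euler-integrated in Python. The parameters are textbook-style illustrative values, not measurements; the point is that one simple update rule, iterated, produces the spiking behavior:

def simulate_lif(input_current=1.5, dt=0.1, steps=500,
                 tau=10.0, v_rest=0.0, v_threshold=1.0, v_reset=0.0):
    """Leaky integrate-and-fire: the membrane voltage leaks toward rest,
    is driven by the input current, and fires/resets at a threshold."""
    v, spike_times = v_rest, []
    for step in range(steps):
        dv = (-(v - v_rest) + input_current) / tau   # leak toward rest + drive
        v += dv * dt
        if v >= v_threshold:                         # fire and reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

print(simulate_lif())  # regular spike times fall out of the simple rule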

Self-awareness is the by-product of a sophisticated simulation of social interaction within a tribal group.

Simple brains (like a bee’s) are stimulus-response affairs. Get input A, perform action B.

More complicated brains run ongoing simulations of the real world. Sensory input gets fed into the simulation to keep it synched up. These simulations can be used to predict future real-world events. The brain then triggers actions based on these predictions.
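
A toy version of that predict-and-correct loop, with everything made up except the loop’s shape:

def brain_tick(model: float, sensed: float, gain: float = 0.7):
    """Predict from the internal model, act on the prediction, then nudge
    the model toward what the senses actually reported."""
    prediction = model                       # run the simulation forward
    action = "duck" if prediction > 0.8 else "carry_on"
    model += gain * (sensed - prediction)    # keep the simulation synched up
    return model, prediction, action

model = 0.0
for sensed in [0.1, 0.5, 0.9, 0.95, 0.9]:    # say, an object looming larger in view
    model, prediction, action = brain_tick(model, sensed)
    print(f"predicted {prediction:.2f}, sensed {sensed:.2f} -> {action}")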

Different animals focus on simulating different aspects of their environments depending on their particular needs. Social animals like dogs and humans spend a lot of their mental resources simulating interactions between members of their group. (The value of this in terms of food gathering and reproductive success should be obvious.)

To have a really good simulation of social interactions in a group that you belong to, you need to factor your own behavior into the simulation.

This is self-awareness. It’s the brain simulating itself. All social animals have it to some degree. Humans have it the most.

I believe that there is indeed more that computers can do. Alan Turing showed that with a sufficiently advanced computer (or merely enough time) one could compute essentially anything that a human can. This means that, barring the discovery of the “Soul,” it should be possible to create computers with imagination, creativity, problem solving, and even empathy. The rate at which our technology advances is the only limiting factor to how soon this happens.

Now, if computers merely execute code written by people, how could they ever come up with anything original? How could they solve problems which their creators and programmers could not?

A couple of the ways that computers can do this are through neural networks and genetic programming. Basically, computers can be set up with some simple (or possibly fairly complex) rules and then given lots of raw data to train on. Eventually, the computer should be able to make discoveries and infer things on its own, by comparing new situations with what it has already learned.
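
Here is a minimal genetic-algorithm sketch of that “simple rules plus selection” idea (a close cousin of genetic programming, which evolves program trees the same way). The target string and all the parameters are arbitrary demo choices; the point is that nothing in the code spells out the answer, yet selection and mutation find it:

import random

TARGET = "HELLO WORLD"
CHARS = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate: str) -> int:
    """Count of positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate: str, rate: float = 0.05) -> str:
    """Randomly resample each character with a small probability."""
    return "".join(random.choice(CHARS) if random.random() < rate else c
                   for c in candidate)

population = ["".join(random.choice(CHARS) for _ in TARGET) for _ in range(200)]
for generation in range(2000):
    population.sort(key=fitness, reverse=True)
    if population[0] == TARGET:
        break
    survivors = population[:50]                      # selection
    population = survivors[:1] + [                   # elitism: keep the best
        mutate(random.choice(survivors))             # reproduction with mutation
        for _ in range(199)]
print(f"generation {generation}: {population[0]}")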

My personal belief is that computers could advance to the level of human intelligence in as little as 50 years. That’s based mostly on conjecture and some back-of-the-envelope calculations involving Moore’s law and the number of neurons/connections/rate of firing in the human brain.
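
For what it’s worth, that kind of envelope calculation looks roughly like this; every constant below is a rough, contestable assumption rather than a measurement:

import math

neurons = 1e11             # ~10^11 neurons in a human brain (ballpark)
synapses_per_neuron = 1e4  # ~10^4 connections each (ballpark)
firing_rate_hz = 1e2       # ~100 firings per second, an upper-end figure
brain_ops_per_sec = neurons * synapses_per_neuron * firing_rate_hz  # ~10^17

machine_ops_per_sec = 1e13   # assume ~10 teraflops available today (assumption)
doubling_period_years = 2.0  # Moore's-law-style doubling period (assumption)

years = doubling_period_years * math.log2(brain_ops_per_sec / machine_ops_per_sec)
print(f"~{years:.0f} years of doublings to reach ~{brain_ops_per_sec:.0e} ops/sec")

These numbers give roughly 27 years; swap in more pessimistic constants, say 10^4 machine operations per synaptic event, and the answer stretches past 50 years. The estimate is extremely sensitive to its inputs.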

Have you ever developed and trained a neural net?

Heh, as a matter of fact I have, a few times, in AI class at Carnegie Mellon University. One of the assignments was a rock/mineral recognition program. You feed it data on the hardness, color, etc. of a bunch of known rocks, and pretty soon it becomes able to classify new rocks it hasn’t seen before, based on what it was told the previous rocks were. It wasn’t perfect by a long shot, of course.
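
The learn-from-examples shape of that program can be shown in a few lines. The original assignment used a neural net; the sketch below substitutes the simplest possible learner, a nearest-neighbor rule, and the training examples are invented from standard mineral figures:

known_rocks = [
    # (Mohs hardness, density in g/cm^3) -> label
    ((7.0, 2.65), "quartz"),
    ((3.0, 2.71), "calcite"),
    ((6.5, 5.15), "magnetite"),
    ((2.0, 2.32), "gypsum"),
]

def classify(hardness: float, density: float) -> str:
    """Label an unseen rock by its closest training example. Nothing in
    this function was written with any particular rock in mind; the
    behavior comes entirely from the examples it was given."""
    def distance(example):
        (h, d), _label = example
        return (h - hardness) ** 2 + (d - density) ** 2
    return min(known_rocks, key=distance)[1]

print(classify(6.8, 2.60))  # near quartz's measurements -> "quartz"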

I know that neural nets currently have rather severe limitations on what they can do. No one would call the program I just mentioned “intelligent”. I just put them forth as an example of a type of program where the programmer is not explicitly specifying what the computer should do at every step.

This is in contrast to the way 99% of real programs are made today. It is easy to see why someone would say that a computer will never be able to do everything, given conventional programming techniques. Having to specify every instruction for every situation would take even the largest team of programmers forever. I’m just pointing out that there is a lot of active research on new types of programming which may eventually lead to a truly intelligent machine.

There are some known results in computability theory about the limitations of computers. For instance, the halting problem is undecidable: no program can exist that correctly decides, for every program and input, whether that program will eventually halt.
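
The classical argument for that is short enough to sketch as code. Suppose someone handed us a perfect halts() decider; we could then write a program that defeats it:

def halts(program, argument) -> bool:
    """Hypothetical perfect halting decider; the argument below shows
    no such function can actually exist."""
    ...

def troublemaker(program):
    """Do the opposite of whatever halts() predicts about program(program)."""
    if halts(program, program):
        while True:      # predicted to halt? then loop forever
            pass
    return               # predicted to loop? then halt immediately

# troublemaker(troublemaker) halts if and only if halts() says it doesn't,
# so any claimed halts() must be wrong somewhere: the problem is undecidable.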

He did?

Ok, I should qualify and explain that a bit better. Defining intelligence can be very tricky philosophically, and so Turing wanted to avoid this and nail things down with something more concrete. Thus, he came up with the idea of the Turing Test. The Wikipedia article explains the Turing Test much better than I could, as well as the objections to it.

Now, the test is not perfect, but it is as good as the tests we use to tell that other humans are intelligent. If you can accept that other humans are intelligent based only on talking to them, then you should be able to apply the same criteria to machines.

So now the question becomes, can a machine ever pass the Turing test? Turing believed so; however, I don’t think he could prove this mathematically. It seems almost certainly true to me, however. Check out “On Computable Numbers, with an Application to the Entscheidungsproblem”, particularly section 6. Ok, that’s a bit dense, but basically he states that:

“It is possible to invent a single machine which can be used to compute any computable sequence.”

And proves this fact mathematically. Now, finite-length strings in English correspond to computable sequences. Therefore, there exists some computer that can deliver the “right” answer for whatever question you ask it during a Turing test. How the computer actually does this is not specified, just that it should be possible in theory.

Sorry if I was overstating the case a bit in my original quote.

What Turing did was to invent a machine that could compute any Turing-computable sequence; he did not prove that Turing-computability matches up exactly with our notion of effective computability. That’s known as the Church-Turing thesis, and there’s debate as to whether that even could be proven.