How would software for super-fast computers work?

I don’t know much about how software is written, but I was reading a book yesterday about nanotech that said today’s computers can compute about 10^10 bits of info a second, while a functional quantum computer could do closer to 10^51 bits a second. But even if that happened, wouldn’t software still be written by hand by programmers? Wouldn’t all that extra CPU power be wasted, since I don’t see how people could write software by hand that required anything resembling that much computing power?

Umm… I’m not quite sure how to answer that question.

First off, speaking as a programmer, one thing you need to understand is this: the basic virtue of computers, really, is being able to loop over large sets of possible results, do some calculations on each one, and return some kind of answer. There are a lot of other things that computers can do, but fundamentally I’d say that without the power of looping, they’d be nothing more than curiosities.

All kinds of problems and goals have been identified that are easy to specify in instructions for a computer, but would require looping over such a large set of possibilities that the answers would take far too long to be of any use. Increasing the speed of fairly conventional processors by 40-odd orders of magnitude would open up a lot of them.
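
For a concrete (if made-up) illustration of “easy to specify, hopeless to wait for,” here’s a minimal Python sketch of a brute-force subset-sum search; the function name and the sample numbers are my own, not anything from the thread:

    from itertools import combinations

    def subset_summing_to(values, target):
        # Try every subset, smallest first; with 60 values there are 2**60 subsets,
        # so a conventional PC would never finish, but the code itself is trivial.
        for size in range(len(values) + 1):
            for combo in combinations(values, size):
                if sum(combo) == target:
                    return combo
        return None

    print(subset_summing_to([3, 7, 12, 19], 22))   # tiny input: prints (3, 19)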

To take another example, consider gaming or other real-time modelling, where the fundamental loop is evaluating the modelled world, showing it on a screen, and then advancing the model through another fractional-second ‘tick.’ It’s fairly easy for programmers to build a model world so complicated that a computer processor can’t keep up with it in anything like real time… such that only one-tenth of a second would pass in the game for every second of real time. Obviously, this isn’t desirable. So an increase in computer speed would increase the complexity of the models that could be programmed.
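
Something like that real-time loop, sketched in Python (the update and render functions here are empty placeholders I made up, just to show the shape of the loop):

    import time

    TICK = 1.0 / 60.0                 # aim for 60 model updates per second

    def update(world, dt):
        world["t"] += dt              # placeholder: advance the simulation by dt seconds

    def render(world):
        pass                          # placeholder: draw the current state

    world = {"t": 0.0}
    for _ in range(600):              # run ten seconds of simulated time
        start = time.perf_counter()
        update(world, TICK)
        render(world)
        spare = TICK - (time.perf_counter() - start)
        if spare > 0:
            time.sleep(spare)         # if update+render take longer than TICK, the model falls behind real time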

I hope that this gives you an idea. From what little I know of quantum computing, the problem isn’t going to be that we couldn’t write software that would require that much computing power. The problem will be, to actually get to USE that computing power, we’re going to need to figure out some new ways of programming, because quantum computers can’t just work as a huge speed upgrade to traditional style processors. (too bad.)

PS: Although software is still ‘written by hand’ in general, it’s easy to underestimate how much software complexity is already possible because of co-operation and hierarchical organization… an incredible amount of work has gone into building complex and subtle code libraries that essentially increase the power of what an application writer can do a thousand-fold versus 25 years ago, because with one line of code he can call a library routine that originally took a thousand lines of code to implement. Plus, at some point soon, we might actually get to the point where we can instruct computers to write programs for us… and not have them screw up too badly at it. :smiley:

Software runs just as fast as the machine will let it. It doesn’t need to be written any differently to work on a faster machine (generally speaking).

An application might be something as simple as saying, “Read 8000 bytes from RAM, output that data to the ethernet line.” Running it on a faster machine means that you’ll be moving more 8000-byte chunks per second than you would on a slower machine. But regardless of the computing power, the instructions are perfectly fine, since “Move X from Y to Z” just isn’t all that complex, nor does it need to be.
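
As a rough Python analogue of that idea (the path, host, and port in the commented-out call are made up):

    import socket

    CHUNK = 8000   # bytes per read, as in the example above

    def stream_file(path, host, port):
        # The loop is the same no matter how fast the machine is; a faster
        # machine just gets through more 8000-byte chunks per second.
        with open(path, "rb") as f, socket.create_connection((host, port)) as conn:
            while True:
                chunk = f.read(CHUNK)
                if not chunk:
                    break
                conn.sendall(chunk)

    # stream_file("/tmp/example.bin", "192.0.2.1", 9000)   # hypothetical usage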

Now, new hardware can introduce special functionality that a programmer can take advantage of, but that’s a separate topic, and so long as the method by which the programmer is meant to interact with the functionality isn’t entirely unintuitive, there is no real problem for the programmer in using it. Computers already process so much more data in a reasonable time than any human could ever hope to follow that it really is a bit late to worry about us being able to code for that.

I think you’ll find better answers to this question in GQ, Wesley Clark. I’ll move it there for you.

I agree about looping being the most important thing for many applications. I am a business analyst consultant/developer and I work with large data sets.

A big part of my job is writing SQL that loops through large numbers of records (say 2,000,000), combines that with 15 other tables that have as much or more data, and gives me some result. The result may be a multi-million dollar bill generated over a certain amount of time.

I have been in this business for 8 years, and the computers we use are roughly 10 to 30 times more powerful than when I started. Some of the stuff I wrote back then would take a whole night, and we just couldn’t do some of the things that we wanted because of computing limitations. Old-timers tell me about simple jobs that they wrote 25 years ago that would take a whole weekend. Now, most things take minutes, but I occasionally do something really complex that takes hours. Faster computers could whip through that in no time as well.

Programming will always require humans. Computers are dumber than a June bug with brain damage, but they are really fast. Much of the value of a fast computer is analyzing millions or billions of pieces of information. The instructions to do that may be pretty simple, though.

The instructions to do that are always extremely simple at a hardware level. Always. That gets to the heart of what computers are: Machines. They have no ability to think in any way, shape, or form whatsoever. A computer will do exactly the same thing a billion times a second for all of the same reasons a rock will hit the ground when released from a truck.

Anyway, there are sometimes software-level (even application-level*) ramifications to hardware-level changes. The big thing now in the desktop world is dual-core and multi-core processors, as heralded by the Pentium D and, to a lesser extent, the Cell chip. The upshot of a multi-core chip is that one piece of silicon now contains the equivalent of multiple CPUs, and to get the most out of a chip like that the software has to have something all the cores can do at the same time. This is called parallelization, and some problems can be parallelized easily and some cannot. The software will (probably) run acceptably fast if it can’t be parallelized, but it will keep one core doing the equivalent of twiddling its thumbs while the other does all the work.

*(The whole point of software development is abstraction. This is done in layers, with each layer shielding (or attempting to**) the one above it from the complexities of the one beneath. The hardware level is the lowest level, with the OS level built on top of that, and the application level built on top of that. There are many complications (often there is no OS and only one application that does everything, as in a VCR), but that division is very common.)

**(All abstractions leak. All of them.)
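
To make the parallelization point concrete, here’s a minimal Python sketch (the work function and the data are invented for illustration). A problem like this splits cleanly across cores; one where each step depends on the previous step’s result would not:

    from multiprocessing import Pool

    def partial_sum(chunk):
        # An independent piece of work that one core can do on its own
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = list(range(2_000_000))
        chunks = [data[i::4] for i in range(4)]      # four independent slices
        with Pool(processes=4) as pool:
            results = pool.map(partial_sum, chunks)  # each slice can run on its own core
        print(sum(results))                          # same answer as the single-core version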

It is very easy to write software that requires enormous computer resources. A full-scale version of the software I write would take around 2 weeks to run on your average PC. This is in scientific computing, where computers are always too slow; I have to make tons of approximations just to get it to finish a calculation in that short a time. For example, try evaluating a quadruple integral with 500 grid points in each dimension (which is really just summing a bunch of numbers together), or diagonalizing a 1000-by-1000 matrix.
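
For flavor, here’s roughly what those two chores look like in Python with NumPy. I’ve shrunk the grid to a size that will actually finish (500 points per axis in four dimensions would be 500^4 = 62.5 billion values), and the numbers are just random stand-ins:

    import numpy as np

    # Quadruple integral as a brute-force sum over a 4-D grid
    # (n = 500 would mean 500**4 points; n = 50 keeps this runnable on an ordinary PC)
    n = 50
    grid = np.random.rand(n, n, n, n)       # stand-in for the integrand sampled on the grid
    integral = grid.sum() * (1.0 / n) ** 4  # crude Riemann-sum approximation on a unit hypercube

    # Diagonalizing a 1000-by-1000 matrix: roughly n**3 arithmetic operations
    m = np.random.rand(1000, 1000)
    sym = (m + m.T) / 2                     # symmetrize so a symmetric eigensolver applies
    eigenvalues = np.linalg.eigvalsh(sym)

    print(integral, eigenvalues[:3])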

As a non-traditional programmer, I have some insight into the question.

Traditional programming techniques approach the solution of a problem as a step-by-step process. Witness flow charting.

More advanced programming looks toward multi-threading. This means doing two or more step-by-step processes at the same time.

There is a long-established pool of programmers, mostly ignored by academia, who engage in massive multi-threading. They run their programs on hideously slow machines known as PLCs (Google for an explanation). Their programs may have several thousand threads running concurrently. There is no speed, because these programs run on traditional hardware that is only emulating a true massively multithreaded processing environment. If REAL hardware were built that could actually implement a multi-thousand-rung ladder-logic application, then you would see that the answer to your question is an unqualified YES… the art of programming is certainly advanced enough to take advantage of any and all foreseeable hardware advances.
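
For those who haven’t seen it: a PLC gets that effect by re-scanning every rung of the ladder on each cycle rather than by running real OS threads. Here’s a toy Python sketch of that scan loop (all the names and the rung logic are made up):

    # Toy scan cycle: thousands of independent rung-like rules, all re-evaluated every pass,
    # which is how sequential hardware fakes "everything runs at once".
    inputs = {f"sensor_{i}": (i % 2 == 0) for i in range(5000)}
    outputs = {}

    def scan_once():
        for name, value in inputs.items():
            outputs[f"relay_{name.split('_')[1]}"] = not value   # trivial stand-in for one rung

    for _ in range(100):     # a real controller loops forever, aiming for millisecond-scale scan times
        scan_once()

    print(len(outputs))      # 5000 "coils" updated on every scan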

Well, not necessarily. A holy grail of AI is a computer that can program itself. (Not saying it’s even on the distant horizon, but simply that the possibility exists.)

At any rate, I’m thinking that you’re missing a bit about programming – source code is just “higher level” instructions that are compiled (or interpreted). That process is deterministic and can be designed to find parallelism. My point being that better (dare I say “more intelligent”?) compilers will allow code (that has the requisite parallelism) to run on any given architecture.

If you’re at all interested, the Wikipedia article on supercomputing is pretty good. Note the bit under the “processing techniques” section that mentions that some graphics cards have teraflop computing capability. You also might be interested in the following references concerning intelligent memory/processing in memory (which relies in a big way on multi-threadedness, mentioned by Kevbo):

Here: C. Kozyrakis, S. Perissakis, D. Patterson, T. Anderson, K. Asanović, N. Cardwell, R. Fromm, J. Golbus, B. Gribstad, K. Keeton, R. Thomas, N. Treuhaft, and K. Yelick, “Scalable processors in the billion-transistor era: IRAM,” IEEE Computer, vol. 30, no. 9, pp. 75–78, 1997.

Here: J. Brockman, P. Kogge, V. Freeh, S. Kuntz, and T. Sterling, “Microservers: A new memory semantics for massively parallel computing,” in Proc. of the 1999 International Conference on Supercomputing, 1999, pp. 454–463.

Pfft. I did that while sittin’ on the can this morning.

Yeah? Well I bet your answer was crap.

Not quite. Many instructions for CISC machines, or even some old IBM mainframes, were reasonably complicated, doing things like block copies. They got implemented by reasonably large microcode sequences. Less complex than an application program, true, but not all that simple. Not to mention that instruction sequencing and speculative execution is far from simple. Not all those hundreds of millions of transistors in processors today go into cache.

I’m already using my computer to participate in a “grid computing” program through http://www.worldcommunitygrid.org/ - it’s something to do with analyzing protein-folding, and it may be a great help to medical science at some point in the future. Basically, some central computer is sending my computer (as well as others) a bunch of data and instructions on what to do with it, then my computer is sending back the results. It’s nothing they couldn’t do on their own computers, but it would take much longer.

This project has been going on for a few years, I think, and if speeds really became 10^41 times faster (the jump the OP describes), the results of this could become available much faster. It’s not that the program requires a certain processing speed - it’ll just go faster if you have a faster machine.

To look at it from another angle, here’s a program that can fully use any amount of computing power it is given:

  1. Go to step 2
  2. Go to step 1

That’s it. As you can see, there is no “human factor” involved. Humans don’t churn out 100,000,000 instructions even for today’s computers; they just give far fewer instructions that get executed many, many times. Looping, as was said earlier in the thread.
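
In a real language, that two-step program is just an unconditional loop. In Python, for instance:

    while True:     # steps 1 and 2 rolled together: do nothing, forever, as fast as the CPU allows
        pass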

You completely, utterly, 100% missed my point. My point was that computers do not think, but merely obey the laws of physics like any other machine. ‘Simple’ in this case was defined as ‘mechanical, not requiring actual thought’.

Even the polynomial-solving opcode implemented in the VAX was dead simple compared to what rats do every moment they’re alive, let alone conscious.

Merely trivially obey the laws of physics, as opposed to the nontrivial interactions that occur within the brains of living things. Understanding transistors is a lot easier than understanding groups of neurons.

:confused:

Hey Pappy, you hear about these ‘motorcars’ they’re going to build? Like a carriage that don’t need no horse to pull it! And I hear they can run so fast a two-day journey would be cut to a single afternoon!

Yeah, well how would they ever make roads for 'em? How could people, no faster than folks are, ever make a road you could drive that fast on?

:confused:

Not in any way to detract from the good responses others have already given on how computers execute loops and plug results into the next iteration and so on (I don’t have anything to add to that), but I think I’m derailed from your logic before we even get that far into your premise. Even if some hypothetical computer program were amazingly complicated (like an entire modern OS) and made full use of the hypothetical supercomputer, at least in the sense that you wouldn’t dream of inflicting that OS on the feeble processors we have today because it would bring them to their knees (like trying to compile and run OS X or XP on an 8 MHz processor), it’s not like your coding team has to write code at some minimum speed in order for the processor to execute it at a certain speed.

And it is, unfortunately, no great challenge for a team of programmers to come up with some badly-written, bloated code that requires an astonishingly fast computer to run it acceptably fast. No, the challenge is to write concise, efficient, elegant code. Tight code gets the best use out of whatever processor you use to execute it, whether it be an 8 MHz processor or a futuristic 500 TeraHertz octuple-core firebreather.

You can run the old programs of yesteryear on a modern computer, if the hardware and software architecture still exists. Find a copy of the old Lotus 123 for MS-DOS and install it on your XP box, and it still runs. You may not be satisfied with it (you may want more from your spreadsheet program these days), but your computer doesn’t need the latest version of Excel; it doesn’t require the more complicated, fancy-featured code. The old code that worked back in '83 works now also. Now, admittedly, Lotus 123 isn’t generally going to “make full use” of your processor (either of them, since even the old 8 MHz CPU is mostly sitting idle while you input your spreadsheet numbers), and that’s the case for the overwhelming majority of the uptime of personal computers. But you could compile a program that would tie up any processor for a long time (e.g., breaking 1024-bit encryption by throwing every possible combination of keys at it until you hit the right one) as a simple DOS program, and it would run on either the new or the old box and would still quite thoroughly occupy, not “waste”, the new box.

Hope that made sense.

I would actually disagree that this program ‘uses’ any computing power at all; it consumes computing power like a… like a mother______, but it produces no useful results and therefore just burns CPU cycles to no avail. :smiley:

This is, I think, not a nitpick but relevant to the OP discussion. If you get a faster processor, or a fancier one, and existing programs burn more CPU cycles per second on the new hardware but do not produce useful results any faster, then they’re not really using the power of the new hardware in my thinking.

This isn’t quite so bad, but my MPEG-encoding program has a switch to allow it to take advantage of my hyperthreading processor. When I use this switch and the computer isn’t really doing anything else, the program consumes about 50% more CPU resources and completes a long MPEG in only 15-20% less time. (Presumably there’s a lot of overhead in dividing the work to be done up between two logical processors, and rejoining the results into a single MPEG file.)
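
Back-of-the-envelope arithmetic on my numbers above (taking “15-20% less time” as roughly 17% and “about 50% more CPU” at face value):

    serial_time = 1.0
    ht_time = 0.83                      # ~17% less wall-clock time with the hyperthreading switch on
    cpu_used = 1.5                      # ~50% more CPU resources consumed

    speedup = serial_time / ht_time     # about 1.2x faster in wall-clock terms
    efficiency = speedup / cpu_used     # about 0.8: relative to perfect scaling, a fifth of the cycles buy no extra speed
    print(round(speedup, 2), round(efficiency, 2))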

How about a real-world example of a real-world problem? Suppose you have a really, really big number (let’s call it n), and you want to factor it (this is important, for instance, in breaking many types of encryption). It’s really simple to write a short program to do that:

Step 1: x = 2
Step 2: Is n/x an integer? If so, x is one of the factors. Spit out x and end the program. Otherwise, proceed to step 3.
Step 3: Add one to x
Step 4: Is x bigger than sqrt(n)? If so, then n has no factors; it’s prime. Tell the user so, and end the program. Otherwise, go back to step 2, and repeat.

Give me a couple of minutes, and I could write a program to follow those steps in any language you want, for any computer you want. And that program will perform a useful task. But if the number n I start with is very large, it might nonetheless take my simple program a very long time (billions of years, even) to finish. In this case, it should be clear why a much faster computer would be useful.
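
For instance, a minimal Python version of those four steps might look like this (not optimized in any way):

    import math

    def smallest_factor(n):
        x = 2                            # Step 1
        while x <= math.isqrt(n):        # Step 4's bound, checked before each trial
            if n % x == 0:               # Step 2: is n/x an integer?
                return x                 # x is a factor
            x += 1                       # Step 3
        return None                      # no factor up to sqrt(n): n is prime

    print(smallest_factor(1001))         # prints 7, since 1001 = 7 * 11 * 13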

Now, granted, I could also write a somewhat more complicated program (but still simple enough to be written by a human) which could do that same job a bit more efficiently. There are, in fact, such programs out there which are more efficient. But nobody yet knows of a program for this job that is a whole lot more efficient than the one I just outlined, and most folks don’t think that there is such an efficient program. So the only way to make it go faster is to use a faster computer.