Star Trek, Commander Data, and His SLOW Computational Speed

On a “Star Trek: The Next Generation” episode, Commander Data is said to possess “a total linear computational speed rated at 60 trillion operations per second.”

This strikes me as rather glacial, given that this Star Trek series depicts life in A.D. 2300.

My guess is that a top-end supercomputer today can perform 60-100 billion operations per second. Wouldn’t Data’s computational speed therefore be much slower than the recent arc of technological development (raw speed and miniaturization) would suggest?

[Note to mods: this pertains to computing speeds, not to television, per se.]

I like the fact that Data uses keyboards. He’s one mother of a typist, but the fact remains, he still uses keyboards.

We’re hitting 40 trillion FLOPS on supercomputers now.

http://www.top500.org/list/2002/06/
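For perspective, here’s a quick back-of-the-envelope sketch. The 18-month doubling period is just the usual Moore’s-law rule of thumb, not anything from the show, and the 40 TFLOPS starting point is the top of that 2002 list:

```python
import math

# Rule-of-thumb assumptions only: Moore's-law doubling every 18 months,
# starting from the ~40 TFLOPS figure for 2002's top supercomputer.
current_ops = 40e12          # top of the June 2002 TOP500 list
data_ops = 60e12             # Data's quoted 60 trillion ops/sec
doubling_period_years = 1.5  # classic Moore's-law rule of thumb

doublings = math.log2(data_ops / current_ops)
print(f"Doublings needed: {doublings:.2f}")                            # ~0.58
print(f"Years at that pace: {doublings * doubling_period_years:.1f}")  # ~0.9
```

In other words, at the historical pace, supercomputers should pass Data’s rated speed in under a year, never mind by 2300.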

1.23 trillion on specialized equipment you can buy for your home PC :slight_smile:

http://www.nvidia.com/view.asp?PAGE=geforce4ti

Yeah, but they don’t say what kind of operation. In The Future they may have microprocessors capable of executing complex algorithms with a single instruction.

Well, first of all, you do have to realize that Star Trek technology is predicted via a rigorous scientific technique known as “making it up”. So we should really be grateful that they even used vaguely credible terminology instead of the usual “neutrino pulses per fortnight” techno-babble.

But let’s pretend that this is a meaningful prediction. The first problem is that “operation” is a pretty flexible thing when we talk about computation. It could mean a single clock cycle of a processor (our current MHz/GHz) or something more “chunky”, a floating point operation, for example, or even a logical inference. Clock speed by itself is relatively uninformative. We can safely assume that Data’s brain will be based on a completely different architecture than current computing technology.

Now, will clock speed continue to increase? Most people think we’ll eventually hit some physical limits. You can only switch things on and off so fast. As you switch things faster, it takes more power and more heat dissipation. 60 trillion operations/second would be pretty impressive. And note that this is “total linear computational speed”. It’s very, very likely that future computers will have highly parallel architectures with many processors operating simultaneously. (Certainly the human brain has a high degree of parallelism.) So Data’s brain might have tens of thousands of processors, each operating at the given speed.
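To illustrate that last point with purely made-up numbers (nothing here is from the show), the “total” figure would just be per-processor speed times processor count:

```python
# Purely illustrative, invented numbers: if "total linear computational
# speed" is an aggregate across parallel processors, then
# total ops/sec = processor count * per-processor ops/sec.
processors = 10_000        # hypothetical processor count in Data's brain
per_processor_ops = 6e9    # hypothetical 6 billion ops/sec each

total_ops = processors * per_processor_ops
print(f"Total: {total_ops:.0e} ops/sec")  # 6e+13, i.e. 60 trillion
```

So the quoted number says very little about how fast any individual part of his brain runs.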

Yes, but all these fast processors (especially things like the GeForce4 Ti) are RISC processors (reduced instruction set) and do comparatively basic jobs. Data’s head is likely to be a CISC processor (complex instruction set).

And another thing - a pocket calculator is faster than a human brain as far as electronic signal speed is concerned, but humans are (sometimes) cleverer than pocket calculators.

Just thought I’d throw in this little howler from the novelization of Star Trek II: The Wrath of Khan, by Vonda N. McIntyre, published 1982. The Genesis Project scientists on the Regula One space station have to clear space on their main computer and the senior staff discover a computer game written by some of the junior staffers. They prepare to move the game file to one of the smaller computers:

In more recent years, the term “kiloquad” has been used in some vague way as a measure of computer performance. The term is deliberately left undefined because no matter how fast they describe a future computer, there’s an excellent chance it will seem ridiculously slow within the next few years.

Data was vaguely written as well, so he could be just as strong or smart as necessary for a given scene, yet surprisingly dumb and slow when the story needed to be dragged out a little longer.

It always annoyed me when the script followed this formula:

Data: I have read the complete works on (war, love, humour, death, law, justice or some other philosophical matter) from X number of Federation worlds. Can you give me your advice?

Then the person he was talking to (be it Picard, Riker, Worf, Crusher, Troi, etc) would give him some vague touchy-feely crap and Data would be satisfied(!) If, for example, I had done a complete survey of “love” and read the complete works of Socrates, Aristotle, Plato, Jesus, Buddha, Mohammed, Confucius, Byron, Freud etc plus their equivalents on a hundred or so other planets, I would hardly expect a buddy to be able to offer something I had missed. The “Oh, Data, you’re so smart, yet so dumb” bit was ridiculously overused.

I don’t see any problem with that. He didn’t say megabits, or megabytes–he said megs. Maybe he meant megaquads, or megafoozles, or mega-yottabytes.

In Deep Space 9 and Voyager, I have seen megaquads, gigaquads, and even teraquads used. It seems to me that, in general, if you replace “quad” with “byte”, the numbers they show on Star Trek would be roughly equal to the numbers today. For instance, the Voyager computer may have a total capacity of a few teraquads. My guess is that a “quad” is a quadrillion bytes, or one petabyte, making a “meg” (megaquad) one zettabyte.
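Taking that guess at face value, here’s a quick sketch of how the prefixes would stack up. The quad = one quadrillion bytes equivalence is pure speculation; the show never defines it:

```python
import math

# Speculative only: Star Trek never defines a "quad".
QUAD = 10**15  # guess from this thread: one quadrillion bytes = 1 petabyte

units = {
    "quad":     QUAD,            # 10^15 bytes = 1 petabyte
    "megaquad": 10**6  * QUAD,   # 10^21 bytes = 1 zettabyte
    "gigaquad": 10**9  * QUAD,   # 10^24 bytes = 1 yottabyte
    "teraquad": 10**12 * QUAD,   # 10^27 bytes, beyond current SI prefixes
}

for name, size in units.items():
    print(f"1 {name} = 10^{math.log10(size):.0f} bytes")
```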

Of course, everyone realizes that Star Trek writers are not in the business of predicting the state of the art, but I still roll my eyes when someone (even in Next Generation) makes a request like, “Give me a list of all planets within 50 light years” and the computer replies with “Processing…”

You could blame that on poor programming. Of course, that just shifts the eye roll to a different aspect. Also, the computer is multitasking and not devoting its entire power to any single request, I would think.

Of course, this is Trek, the king of technobabble; I’m sure if somebody really wanted to, they could come up with a reason why it takes so long and yet make it seem impressive.

Yeah I know. That person is usually I. :smiley: But I also realize it’s a television show, and that if they had the ability to redo the scene, they would have done it differently.

Well, that and clearly the rate of speed increase will eventually have to slow down. We’ve got maybe 20 years of Moore’s Law to look forward to, then we start running into some really tough physical limits. Maybe there will be another breakthrough of some sort, but barring something we don’t understand today, I would think we’ll run out of speed increases fairly soon, relatively speaking. Maybe within a hundred years?
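For what 20 more years of that pace would mean, here’s a rough sketch, again assuming the 18-month doubling rule of thumb and starting from 2002’s ~40 TFLOPS:

```python
# Assumes the classic 18-month doubling holds for 20 more years,
# starting from ~40 TFLOPS (2002's top supercomputer).
start_ops = 40e12
years = 20
doubling_period_years = 1.5

factor = 2 ** (years / doubling_period_years)
print(f"Speedup: {factor:,.0f}x")                      # ~10,321x
print(f"Projected: {start_ops * factor:.1e} ops/sec")  # ~4.1e+17
```

That’s roughly 400,000 trillion ops/sec before the limits even kick in, which makes 60 trillion in 2300 look stranger still.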

Or maybe he meant that people are still playing Quake in 300 years.

(That’s right… guess how big Quake is? :D)

It’s my opinion that that’s an extremely conservative condition you’ve casually thrown in there. I would be extraordinarily surprised if 100 years from now we don’t know a lot that we don’t know today.

Did anyone else see Voyager this week?
We learned that Deuterium burns as hot as plasma, and that it is drilled for :eek: under the enormous pressure of 5000 mbar :rolleyes: - that’s nearly 75 psi!

Star Trek has nothing to do with science (except by accident)
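For the record, the conversion behind that :rolleyes: checks out:

```python
# Unit check: 5000 millibar is only about five atmospheres.
mbar = 5000
psi = mbar * 0.0145038    # 1 mbar = 0.0145038 psi
atm = mbar / 1013.25      # 1 standard atmosphere = 1013.25 mbar

print(f"{mbar} mbar = {psi:.1f} psi ({atm:.2f} atm)")  # 72.5 psi, 4.93 atm
```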

That means jack. What the heck is “linear computational speed”?

Of course, you have to remember that it came out originally in 1989 or somewhere around there. I don’t think we had 486s back then yet.

Reminds me of the classic science fiction RPG Traveller. First published in the mid-'70s, it used a tech level system which ran from TL1 to over TL15,

TL1 (Stone Age) ----> TL7 (modern day) ----> TL15+ (Star Trek stuff)

with examples of the capabilities of the tech at each level. By the time it was published, the capabilities it gave for a tech level 7 computer were already behind the times. The average PC sitting on your desk today is, according to the old tables, the sort of thing you’d expect to see in a TL14 battle computer for a million-ton battleship.

That may be, but there’s still a point at which you can’t continue to double your speed every couple of years. There are fundamental limits of energy and communication that we’ll run into.

You used to see charts in the 70’s, showing how man had gone from horse-speed in the late 1800’s to the X-15 going Mach 5+ by 1965. The slope of that line could have been calculated to give you something like Moore’s law, and if someone had done that in, say, 1930, it would have proved to be remarkably accurate for the next 40 years. In fact, the rate of increase was even more astonishing than the rate of increase in computers, because it was a geometric progression. We went from horse-speed to car speed in a decade, from car speed to slow aircraft speed in another decade, then up to the speed of sound in the two decades after that, and then with the space program suddenly we were going tens of thousands of miles per hour.

You used to be able to find articles predicting that man would be travelling faster than the speed of light by the early 2000’s, based on extrapolating that curve into the future.

But since the 1970’s, the increase in speed has essentially stopped. We ran into a lot of fundamental physical limits.

The same thing will happen to computers. The phenomenal speed increases we’ve been used to for the last 30 years will soon begin to slow down, and eventually almost stop.

Then there’s also the tendency for the military and government to use slower, more outdated hardware, trading speed for reliability.