Star Trek, Commander Data, and His SLOW Computational Speed

Oops. Forgot that Data was built by a computer scientist for his own amusement, not by a government.

Sorry!

Actually, the reason is that we just ran out of technology.

The first big speed increase came from the internal combustion engine. The ability to harness that much power gave us the first big jump. If we hadn’t invented jet engines, aircraft speed would have topped out around 650 mph. But just as we ran out of ability to go faster, the jet engine came along. The limits of the jet engine and metals would have had us top out at around Mach 3. To go faster, we had to invent rockets. Once we had those, we could get up to interplanetary speeds.

But now we’ve run up against those limits, and we don’t have anything to replace rockets with. So the curve of speed suddenly flattened right out.

The same thing is happening with computers. Each time we reach the limits of a certain chip technology, we come up with something smaller and faster. 15 micron processes go to 12 microns, etc.

But at some point we’ll hit theoretical limits. Once you can’t go faster by going smaller, you’re in trouble. Then you run into problems of shedding heat, and theoretical bottlenecks in trying to move vast amounts of information over small communication channels.

I think we still have a few breakthroughs ahead - molecular computers, optical switching, etc. So computers 50 years from now will be incredibly powerful by today’s standards.

But at some point, we’ll hit a limit. The curve can’t continue forever.

So far, the theoretical limit I see is the speed of light. But then, we can always hope for the tachyon :smiley:

14 years later…

With all that processing power, we still can’t put the thread in the right forum!

(Reported for forum change)

Note that he says “linear processing speed”. While that’s not defined, I would assume that to mean serial operations. We can make chips with a lot more operations per second, but they’re massively parallel. We can’t yet make a computer that can do one operation, finish it, and then do another one, 60 trillion times in a row in a single second.
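As a minimal sketch of the difference (pure illustration; the function names are made up): in the first loop below, every iteration depends on the previous result, so no amount of parallel hardware can speed it up, while the second task can be split across as many cores as you like.

```python
# A serially dependent chain: step i cannot begin until step i-1 has
# finished, so extra cores are useless here. Doing 60 trillion of these
# dependent steps in one second is what "linear processing speed" demands.
def serial_chain(x, steps):
    for _ in range(steps):
        x = (x * x + 1) % 1_000_003  # each result feeds the next step
    return x

# An "embarrassingly parallel" task: the additions can be split across
# many cores and the partial sums combined, so parallel chips shine here.
def parallel_friendly(values):
    return sum(values)  # associativity lets the work be divided up

print(serial_chain(2, 10))  # the ten steps must run strictly in order
```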

Yeah, the era of Barbie Programming is coming to an end. “Barbie Programming” is writing a simple algorithm and, if it isn’t fast enough, waiting for the hardware to catch up and then buying a new processor. Or, in shorter words: “Optimization is hard, let’s go shopping!”

Compilers are improving. Parallelism is the new way forward right now, but it’s hard to automate the process of turning a sequential algorithm into a parallel one, whether you’re talking about multithreading or SIMD opcodes; that said, I’ve seen GCC do some rather smart little hacks applying SIMD hardware to string processing. Right now, parallelism is job security for competent programmers, which means it’s expensive.
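As a rough illustration of what that vectorization buys (using NumPy as a software stand-in for SIMD hardware, since plain Python has no vector opcodes): the first function does one multiply per loop iteration, in order; the second hands the whole array to code that can use SIMD registers underneath.

```python
import numpy as np

def scale_scalar(values, factor):
    # The sequential algorithm: one multiply at a time, in order.
    out = []
    for v in values:
        out.append(v * factor)
    return out

def scale_simd_style(values, factor):
    # NumPy applies the multiply across the whole array in one call;
    # under the hood this can map onto SIMD instructions, the same
    # trick a vectorizing compiler like GCC pulls on suitable loops.
    return np.asarray(values) * factor

print(scale_simd_style([1.0, 2.0, 3.0], 2.5))  # [2.5 5.  7.5]
```

The hard part alluded to above is recognizing which loops can legally be transformed this way: a human can often see it instantly, but a compiler has to prove it.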

The big new frontier is actually a return to specialized hardware. Back in the Before Time, high-end computers had special, proprietary CPUs with specialized instruction sets. A number of them even used microcode, a layer between hardware and software that allows you to have more complicated opcodes than you could reasonably implement in pure hardware. These specialized CPUs existed because they were the only way to get the hardware of the time to run some kinds of code fast enough.

Then, in the mid-1980s, Intel released the 80386, the first x86 chip worth bothering with in the Real World, and soon all of those specialized CPUs were edged out of the market by commodity chips which weren’t specialized for anything in particular but were fast enough to beat problems to death with sheer straight-line scalar performance. These days, x86 has percolated up to the very top, with high-end supercomputers being made out of whole roomfuls of x86 computers on very fast specialized LANs.

Now that that era’s apparently coming to an end, Intel is putting FPGAs on their latest-generation high-end server CPUs. An FPGA is programmable hardware: Feed in the right data, and it reconfigures itself into a new kind of processor. FPGAs aren’t fast, but they can be used to make extremely parallel processors with tons of very simple, very specialized computing elements which all work on data at the same time, possibly assembly-line style, with each processor doing one computation and passing the result along to the next one; this is called a systolic array.
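Here’s a toy model of the systolic idea in plain Python (purely illustrative; a real systolic array would be laid out in FPGA fabric): a chain of one-operation stages, each passing its result to the next on every clock tick, so several data items are in flight at once, and once the pipeline fills, a finished result emerges every tick.

```python
# Toy systolic pipeline: each stage does one tiny computation and hands
# its result to the next stage every "clock tick". After the pipeline
# fills, one finished result exits per tick even though each stage only
# ever performs one simple operation.
def systolic_pipeline(inputs, stages):
    n = len(stages)
    regs = [None] * n                    # one latch per stage
    outputs = []
    stream = list(inputs) + [None] * n   # extra ticks to drain the pipe
    for item in stream:
        # The last stage's result leaves the array...
        if regs[-1] is not None:
            outputs.append(regs[-1])
        # ...and every value advances one stage, transformed as it goes.
        for i in range(n - 1, 0, -1):
            regs[i] = stages[i](regs[i - 1]) if regs[i - 1] is not None else None
        regs[0] = stages[0](item) if item is not None else None
    return outputs

# Example: three one-operation stages, like cells in a systolic array.
result = systolic_pipeline([1, 2, 3, 4],
                           [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3])
print(result)  # [1, 3, 5, 7], i.e. ((x + 1) * 2) - 3 for each input
```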

Since this is about computer speeds rather than Star Trek per se, I think GQ is fine.

Colibri
General Questions Moderator

PS. Please note that this thread was started in 2002.

So, when is someone going to come up with a version of Cripple Mr Onion you can play on your computer?

Today’s computer speeds are measured in OPS or FLOPS (floating-point operations per second). Commander Data’s speed is measured in CLOPS (cognitive-logical operations per second). One CLOP involves a complete cycling of a distributed memory similar to what Kanerva envisioned, and can be equivalent in utility to many thousands of FLOPs.

It should be noted that the Commander’s speed is less than 20 TeraCLOPS when in learning mode. (When in the high-speed 60 TeraCLOPS mode, some learned data can be buffered and later stored in the distributed memory during dream-like rest periods.)

A viable way to slow down the CPU and still achieve whopping great usefulness is to have content-addressable memory.

Well, you know, your vehicle registration database might have an index for make, an index for model, an index for body type, an index for color, an index for owners’ names… That’s fine for data where the fields being indexed have a limited format. But the problem with indexing poorly defined records is that the index ends up as large as the data itself.

It might well be that in the future, RAM itself is built to answer “tell me which cell contains this <value>”, because that collapses what would otherwise be many instructions into a single step. It’s like having a million CPUs going all at once: the query value is broadcast to every single word of the RAM in one step and compared in place by the googolplex of RAM cells, with the RAM doing all the cross-checking itself, rather than the CPU having to drag each word back and compare them one at a time. Content-addressable RAM is therefore as powerful as having as many CPUs as RAM cells, for the very frequently run step of identifying things given other data.

Calculate that out: if Data ran at that speed with that much RAM, and it’s content-addressable RAM, you multiply the two together to get the equivalent non-content-addressable performance at the specific task of a database lookup on a non-indexed field.

It’s plausible there is a physical limit to the density of such RAM in a volume the size of a human cranium…
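A software analogy for the idea (just a sketch; real content-addressable memory does the comparison inside every cell simultaneously rather than through a hash table): the linear scan below is what a conventional CPU must do against plain RAM, one fetch-and-compare per word, while the dict mimics a CAM answering “which cells contain this value?” in effectively one step.

```python
# Software stand-in for content-addressable memory. A plain array must be
# scanned word by word (one CPU comparison per cell); a CAM compares the
# query against every cell at once, which we mimic here with a dict.
ram = ["red", "blue", "green", "blue", "teal"]

def linear_lookup(ram, value):
    # What a conventional CPU does: fetch and compare each word in turn.
    return [addr for addr, word in enumerate(ram) if word == value]

# Build the "content index" once; each query is then a single step,
# like broadcasting the value to every RAM cell simultaneously.
cam = {}
for addr, word in enumerate(ram):
    cam.setdefault(word, []).append(addr)

print(linear_lookup(ram, "blue"))  # [1, 3] after 5 comparisons
print(cam.get("blue", []))         # [1, 3] in one hashed step
```

That is also the back-of-envelope multiplication above: if all N cells compare at once, an unindexed lookup that costs a conventional machine N operations costs the CAM roughly one.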

We should just shut down the patent office, because everything that can be invented already has been.

*(Attributed to Charles H. Duell, Commissioner of the US Patent Office in 1899. Probably apocryphal.)*

Data is a portable device. We shouldn’t expect him to have the ultimate in computing speed and capacity. Just like now: we have powerful computers, but we don’t put the top-speed processors in simple everyday things like thermostats, home security alarm systems, cars, etc.

This is why, for most of Star Trek, they came up with fictional measurements: whatever seemed advanced at the time would sound horribly outdated within a few years of an episode’s production. I remember a discussion by one of the series’ writers saying this is why they came up with “quads” (or whatever it was) when referring to computer storage.

When you look at the old Star Trek series, the computers weren’t even digital, because the concept of digital machines wasn’t commonplace yet. So it is fair to say that by the year The Next Generation takes place, they have a new technology applied to the same tasks. It might be something beyond digital, just as they claim to use subspace communications, which are faster than conventional radio.

When Data takes in a lot of input, they show him reading at super speed. This is because, at the time, it wasn’t commonplace for viewers to understand wireless data communications. Plus, it was an impressive effect to show he wasn’t human and could do something considered superhuman. But there are times when Data says “Accessing…”, and while that isn’t clear, I used to think it implied he was retrieving from his own database stored in his head. But it could just as well have been him accessing a remote database. Such as “Blackjack… accessing!”

TOS ran from late 1966 through early 1969. Digital computers were in widespread use by that time (mostly mainframes). I can think of nothing in the show that indicates that their computers weren’t digital.

From my recollection, the interfaces of the computer systems were analog, backlit screens where items would light up, rather than being true digital video screens. See this picture of Uhura’s console:

I think that has more to do with the show’s budget than the vision of the writers. Certainly they had a huge digital screen when doing “FaceTime” with the local Romulan or Klingon guy threatening to blow their ship up.

Mainframes and minicomputers. It was a time of expansion of access to computing power, in fact, as transistorization was making hardware smaller and cheaper and the very first integrated circuits were being built. (Not microprocessors. Not yet. That would have to wait until the 1970s.)

If discrete items on the console were lighting up, that’s still digital. It might be pretty primitive digital, but analog would be if there were a moving dial (or electron beam, in an oscilloscope), or the like.

You’re talking about the displays. When people talk about digital computers vs analog computers, they’re talking about the internal architecture not the peripherals. Some of the first digital computers used punch cards for input and printers for output. They had no electronic displays but they were most definitely true digital computers.

According to this graph by Ray Kurzweil

Around 1988 you could only get 10^5 cps per $1000. So a machine that could do 10^13 cps, like Data, sounded very futuristic. However, the modern generation of supercomputers can do 10^17.
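Taking those quoted figures at face value (and using the rounded 10^13 cps for Data), the arithmetic works out like this:

```python
# Back-of-envelope using the figures quoted above.
cps_per_1000_dollars = 1e5    # circa 1988, per Kurzweil's graph
data_cps = 1e13               # Data's speed, as rounded above
supercomputer_cps = 1e17      # modern generation, as quoted

cost_in_1988 = data_cps / cps_per_1000_dollars * 1000
print(f"Data-level compute at 1988 prices: ${cost_in_1988:,.0f}")
# Data-level compute at 1988 prices: $100,000,000,000
print(f"Modern supercomputer vs Data: {supercomputer_cps / data_cps:,.0f}x")
# Modern supercomputer vs Data: 10,000x
```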

Since digital computers have been around since WW2, I assumed they were talking about the appearance of the computers as they appeared on the show. Nothing else made sense to me.

Just as when one of my customers complains about having problems with their “CPU” I assume they’re referring to their PC and not their actual CPU chip.