And games, of course. If we were to ever get up to a terabyte of RAM, for instance, I guarantee that games would soon be coming out that demanded that full TB.
Yes, and the usual “blue screen of death” will now be a deeper, richer hue of blue.
Tripler
Oh. . . and count me in on the “porn” vote.
You mean the five richest kings of Europe?
[sub]Someone had to say it.[/sub]
I disagree. As I work in the Adult Industry, I think what drives technological advances more than porn is graphics-heavy applications and games. Admittedly, games are less prevalent on PCs than they used to be, with consoles taking the lion’s share of the market these days and probably forever more, but porn does not rely on technological advances beyond broadband (which is also helped by online gaming) and HDTV (which is also helped by regular TV and DVD entertainment).
Though it undoubtedly has an influence, it would surprise me a lot if it really were still the driving force behind PC technological advances.
“28K of memory (on our PDP-11) is plenty for anything we want to do.” - My officemate in grad school, about one minute before my program ran out of memory.
The quotes are all very amusing, and not very new, but I think you’re not getting what exponential growth in address space really means.
Anyhow, computers from the IBM 360 period could already handle larger data sizes - the microcode just looped over smaller chunks. Far more efficient than building the full width into every operation directly.
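To make the “just looped” idea concrete, here’s a minimal C sketch (purely illustrative - those machines did this in microcode, not software) that adds two 64-bit numbers using nothing wider than 8-bit pieces and a carry:

[code]
#include <stdint.h>
#include <stdio.h>

/* Toy model of "the microcode just looped": add two 64-bit values
   using only 8-bit chunks, propagating a carry byte by byte, the way
   narrow hardware can fake a wide add. */
static uint64_t add64_in_byte_steps(uint64_t a, uint64_t b)
{
    uint64_t result = 0;
    unsigned carry = 0;
    for (int i = 0; i < 8; i++) {
        unsigned sum = (unsigned)((a >> (8 * i)) & 0xFF)
                     + (unsigned)((b >> (8 * i)) & 0xFF)
                     + carry;
        result |= (uint64_t)(sum & 0xFF) << (8 * i);  /* keep low byte */
        carry = sum >> 8;                             /* pass the rest on */
    }
    return result;
}

int main(void)
{
    uint64_t x = 123456789ULL, y = 987654321ULL;
    printf("%llu\n", (unsigned long long)add64_in_byte_steps(x, y));
    /* prints 1111111110, same as a native 64-bit add would */
    return 0;
}
[/code]

Each pass of the loop is a narrow add; the carry is the only thing that has to survive between passes, which is exactly why narrow hardware can fake a wide operation.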
In any case, it is already decided. All our extra transistors are going to cache and multiple processor cores.
The computers my company makes can already be configured with a terabyte of RAM. I suspect this will reach the home market in five years at most.
The games won’t get the full terabyte, since Microsoft’s latest travesty will require 70% of it.
The problem with widening the data/address path (32 - 64 - … - 1024) is finding a packaging/construction medium that allows for the additional physical connections. 64-bit CPUs have ~1000 pins now. Try adding another 2000 (additional data and address lines) to that and you are in real trouble.
So I suspect that this will only occur in very specific applications.
Si
That’s already solved. Wide parallel connections are impossible to do any more, because of interference between the signals and skew. Modern processors use very fast serial connections, and SerDes hardware, to serialize data and address buses, send them off, and then deserialize at the receiving end. Google for SerDes for more information.
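As a rough sketch of the SerDes idea (ignoring the line encoding, clock recovery, and multiple lanes that real hardware uses), here’s what “serialize, ship, deserialize” amounts to, written as toy C:

[code]
#include <stdint.h>
#include <stdio.h>

/* Toy model only: push a 64-bit word down a single "lane" one bit per
   tick, then rebuild it at the far end. Real SerDes hardware adds line
   encoding, clock recovery, and several lanes in parallel. */
static void serialize(uint64_t word, int lane[64])
{
    for (int i = 0; i < 64; i++)
        lane[i] = (int)((word >> i) & 1);   /* one bit per tick */
}

static uint64_t deserialize(const int lane[64])
{
    uint64_t word = 0;
    for (int i = 0; i < 64; i++)
        word |= (uint64_t)lane[i] << i;     /* reassemble the word */
    return word;
}

int main(void)
{
    int lane[64];
    serialize(0x0123456789ABCDEFULL, lane);
    printf("%#llx\n", (unsigned long long)deserialize(lane));
    /* prints 0x123456789abcdef - the word survives the round trip */
    return 0;
}
[/code]

The payoff is that one very fast serial lane stands in for dozens of parallel traces that would otherwise have to be routed and skew-matched individually.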
As others have mentioned, those quoting myopic technologists from the past don’t really understand the point of increasing the wordsize of a computer. Each additional bit of address space doubles the amount of memory you can have. Long before 1024 bits, we’ll be able to uniquely address each quark in the known universe a billion times over. I’m willing to believe that there might be a good reason for that, but it’s certainly not self-evident.
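To put a number on that (taking the commonly cited ballpark of roughly 1e80 elementary particles in the observable universe as an assumption), a few lines of C show how few address bits it actually takes:

[code]
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Assumption: ~1e80 elementary particles in the observable
       universe (a common back-of-the-envelope figure). Give each one
       a billion distinct labels. */
    double labels = 1e80 * 1e9;    /* 1e89 things to address */
    double bits   = log2(labels);  /* address bits needed    */
    printf("about %.0f bits of address\n", bits);  /* ~296 */
    return 0;
}
[/code]

About 296 bits covers every particle a billion times over, so even a 512-bit address space is already far past any physical meaning, never mind 1024.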
Looking at this Wikipedia page of different models and their wordsizes, you can see that, if anything, the rate of change is slowing down. Once you get past the early history of wonky 40-bit words and the like, and start counting at the Intel 4004, it looks like each step from 4- to 8- to 16- to 32-bit wordsize has lasted longer than the one before it. I predict that this trend will continue. We’ll get to 128-bit addressing someday, but not real soon. I wouldn’t bet either way on needing a wordsize larger than that, unless we make some truly revolutionary advances in computation.
Aside from which, I believe all those quotes are apocryphal.
Games’ memory requirements will probably continue to increase, if only because sloppy programming virtually ensures it.
I can see 1024-bit computers being used in genome research and AI and things like that, but for personal computing put me in the “what are people going to do with it?” camp. These days, it seems like the big crimp in performance isn’t processing speed, it’s the speed of internet connections.
Aye, there’s only two things certain in life: Taxes and software bloat.
pwn noobz
Solitaire, of course!
AI? The problem with AI isn’t processing speed. I remember when the Pentium came out, and USA Today said that this would make AI feasible for sure. Being a supermini user, I laughed my ass off.
There is, though, a real improvement in calculation speed with the adoption of 64 bits, because the very common 64 bit float (what C-family languages call a double today) is much more versatile than the 32 bit one. Rounding error problems with 32 bit floats are so common that programmers often avoid them even without a specific reason ahead of time. The FPU in a PC most often accepts and returns 64 bit numbers (though it has an 80 bit stack internally and can return 32 and 80 bit floats too). So in a practical sense, non-integer mathematical values are 64 bit items much more often than they are anything else. Also, I’ve certainly overflowed 32 bit integers in many practical situations, but have never done so with a 64 bit one.
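To make those two points concrete, here’s a quick C demo (values picked only for illustration) showing both the 32 bit float rounding cliff and the extra integer headroom:

[code]
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* 32-bit floats run out of integer precision at 2^24 = 16777216:
       adding 1 there simply rounds away. A 64-bit double stays exact. */
    float  f  = 16777216.0f;
    float  fs = f + 1.0f;     /* rounds back down to 16777216.0f */
    double d  = 16777216.0;
    double ds = d + 1.0;      /* exactly 16777217.0              */
    printf("float:  %.1f\n", fs);
    printf("double: %.1f\n", ds);

    /* 32-bit signed integers top out at 2147483647; 64 bits gives
       enormous headroom for the same kind of count. */
    int64_t big = (int64_t)INT32_MAX + 1;
    printf("int64:  %lld\n", (long long)big);   /* 2147483648 */
    return 0;
}
[/code]

The float result silently drops the +1, the double keeps it, and the 64 bit integer shrugs off a value that would overflow a 32 bit one.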
For these reasons, processors that move 64 bit words directly eliminate a kind of overhead that smaller processors incur in many applications - a gain that would rarely be repeated by going wider still.
IIRC, the early big mainframes often dealt with 8-bit bytes and 64-bit words for the logical reasons above, though they may have used only an 8-bit data path and had no special hardware for moving the 64 bit words.
I have a feeling that no matter how powerful we make computers, we’ll find a way to use up the capacity.
64 bit arithmetic has always been useful. However, older machines might have had a limited number of 64 bit registers for double operations, and 64 bit FPUs. Moving data to these registers would take additional microoperations, and thus be slower, but that was a good tradeoff considering the overhead required to make everything 64 bits.
And we need to mention CDC machines, which were always inherently 60 bits. And ones’ complement. Oh, for the days when instruction sets were really different!
We won’t DO anything. That’s the beauty of it!