What advances are being explored on the cutting edge of computer design these days? Is there a new supercomputer on the horizon that represents the sort of advance over existing machines that the Cray-1 did back in its day?
The big advance recently was the rise of GPU computing. Originally designed for drawing pretty pictures for video games, GPUs evolved into really powerful, massively parallel general-purpose computing devices, and they’re what the current crop of fastest supercomputers is built on (e.g., Frontier at Oak Ridge has roughly 37,000 of them).
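To give a flavour of what “massively parallel” means here: a GPU runs the same small operation across thousands of data elements at once, so anything written in a data-parallel style maps onto it almost directly. A minimal Python/NumPy sketch, with the array size and arithmetic purely illustrative (NumPy is used here only because GPU libraries such as CuPy mirror its interface):

```python
import numpy as np

# Illustrative only: an array standing in for a real dataset.
x = np.random.rand(100_000)

# Serial mindset: one element at a time on a single CPU core.
y_serial = np.empty_like(x)
for i in range(x.size):
    y_serial[i] = 3.0 * x[i] ** 2 + 2.0 * x[i] + 1.0

# Data-parallel mindset: express the whole operation at once.
# On a GPU (e.g. via CuPy, which mimics this NumPy call), each element
# would be handled by its own lightweight thread, thousands at a time.
y_parallel = 3.0 * x ** 2 + 2.0 * x + 1.0

assert np.allclose(y_serial, y_parallel)
```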
The big thing “coming down the pipe” is quantum computing, but of course it’s been “coming down the pipe” for a while now. There is some evidence, though, that functioning computers based on it may arrive in the near-ish future.
There’s not much need for a single large mainframe-style computer anymore. Multiple processors, loosely or tightly coupled, are handling tasks on enormous scales. As already mentioned, quantum computing could introduce new capabilities in a practical way before too long, but it’s still down the road. Most of the heaviest computing tasks now involve searching and indexing massive amounts of distributed data rather than intensive numeric processing.
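Just to illustrate the shape of that kind of workload, here’s a toy map/reduce-style sketch in Python. The text shards and the word-count task are made up purely for illustration, standing in for data spread across many machines:

```python
from collections import Counter
from functools import reduce

# Hypothetical shards of a much larger distributed dataset.
shards = [
    "quantum computing is coming down the pipe",
    "gpu computing is the big advance",
    "searching and indexing massive data",
]

# Map step: each shard is counted independently (in a real system,
# on whichever machine happens to hold that shard).
partial_counts = [Counter(text.split()) for text in shards]

# Reduce step: merge the partial results into one global index.
total_counts = reduce(lambda a, b: a + b, partial_counts, Counter())

print(total_counts.most_common(3))
```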
Indeed. It seems the concept was floated over 30 years ago. (Here’s an old paper I found as an example: [quant-ph/9503016] Elementary gates for quantum computation (arxiv.org).) But of course it takes time and resources to develop functioning technology. All the while, the pace of new discoveries is not slowing down.
Modern supercomputers gain more power the same way modern desktop computers do: by throwing more cores at the problem. That’s fine for tasks that can be broken down into an arbitrarily large number of parallel chunks. For problems that can’t, not so much. It doesn’t matter whether you have 100,000 cores when your task runs on only one of them.
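That limit has a simple back-of-the-envelope form, Amdahl’s law: if some fraction of the task has to run serially, adding cores stops helping fairly quickly. A quick Python sketch with made-up numbers:

```python
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Best-case speedup when `serial_fraction` of the work cannot be parallelized."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

# Even with only 5% serial work, piling on cores tops out near a 20x speedup.
for n in (8, 1_000, 100_000):
    print(n, round(amdahl_speedup(0.05, n), 1))
```

With just 5% of the work stuck on a single core, 100,000 cores buy you only about a 20x speedup over one core.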
Since the hardware is basically off-the-shelf and easily scalable, there doesn’t seem to be much point in naming a specific supercomputer, as though it’s a static thing that will always be reused in the same configuration (though I know that does still happen).
The projects are what matter. What has a metric fk-ton of GPUs actually managed to solve now? Is it protein folding, brain simulations, cosmology, or string theory?
They’re still trying to make graphene chips work. They may have found a way to create a usable band gap with a single layer of graphene, which was previously thought impossible.
I don’t fully understand everything in this paper, but it’s one of only a few open-access papers from the Royal Society of Chemistry I’ve ever encountered:
Currently there is a lot of emphasis on packaging. Chip-on-wafer technology allows diverse technologies to be interconnected at the resolution of on-chip metal (aluminum) deposition. In the near term that means faster and better; in the long term it means cheaper. It means the full function of a laptop can be reduced to a single commodity component whose assembly is entirely automated. It also means the highest-performing computers can become commodities by integrating multiple chip-on-wafer systems on a common substrate; those can then be assembled into massive computing arrays.
The motivation for this is that, with the enormous cost of semiconductor fabrication facilities, manufacturers can only survive if they capture a major portion of the value added from their products. They cannot survive by selling fully tested processor and memory chips for the price of a light bulb. But when they supply a complete computer in a single package, the price goes up and they capture all of it. Companies like HP and Apple then have to differentiate themselves with design and software, and the end user gets a better, cheaper device.