As lazybratsche says, this is a very large cluster of mostly otherwise ordinary computers. One difference is the presence of Nvidia Tesla based nodes as well. These processors are designed specifically for fast numerical work and are not found in ordinary computers, although their genesis was in graphics processors.
There are a few nasty secrets about these large computers. Many jobs are what are known as "embarrassingly parallel." These simply involve lots and lots of copies of the same program running on lots and lots of nodes, usually with different parameters or different initial data sets. A lot of the time these very large computers are never actually run as a single supercomputer; they are divided up among a large number of researchers, each using a small fraction of the system. But it is easier to get money, and it attracts more prestige (which is heavily related to the getting of the money), if you build one huge facility rather than giving each research group its own cluster.

Beyond the embarrassingly parallel, there are computational problems that can be parallelised, but mostly you don't get ten times the speedup for ten times the processors, and sometimes the falloff is depressing (see the sketch below). It is hard to write parallel code, and very hard to write parallel code that scales well to a lot of processors. But it can be and is done for some problems. Some even gain speedups in excess of the number of additional processors. A special class of problems are those that need lots of memory, or those that access very large datasets.
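As a rough illustration of why the speedup falls off, here is a minimal sketch of Amdahl's law, which bounds the achievable speedup by the fraction of the work that has to stay serial. The serial fractions below are illustrative numbers, not measurements from any real machine.

```python
# Minimal sketch of Amdahl's law: speedup is limited by the serial
# fraction of the work, no matter how many processors you add.

def amdahl_speedup(serial_fraction: float, processors: int) -> float:
    """Upper bound on speedup for a given serial fraction."""
    parallel_fraction = 1.0 - serial_fraction
    return 1.0 / (serial_fraction + parallel_fraction / processors)

for s in (0.0, 0.05, 0.10):        # 0%, 5%, 10% of the work is serial
    for p in (10, 100, 1000):      # processor counts
        print(f"serial={s:.0%} procs={p:>4} -> speedup {amdahl_speedup(s, p):6.1f}x")
# Even with only 5% serial work, 1000 processors give you barely a 20x speedup.
```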
Computational science is odd in a way. There are corners of lots of science that are amenable to attack by big compute, and these benefit greatly. But there are others that don’t benefit at all.
Traditional areas for big compute are things like analysis of fluid dynamics, complex structures, chemistry, and molecular dynamics. As computers got bigger, hitherto infeasible tasks became possible: more complex chemistry, simulation of quantum chromodynamics, more complex molecular dynamics, with the ability to step up to biological processes and thus areas such as drug design. Very large scale searching of data is another, and DNA and protein sequencing is a big one. Protein folding, as a special case of molecular dynamics, remains a big and difficult problem. In fluid dynamics, turbulent processes remain extremely difficult.
Many problems in science grow very quickly in size with their complexity. Computational chemistry at the quantum electrodynamics level grows with the fifth power of the number of atoms: double the number of atoms and you need 32 times the computation. If you want to play with big systems you need enormous computational power. These are the areas where scientists can use pretty much all the computational power you can ever give them.
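To make that arithmetic explicit, here is a tiny sketch. The fifth-power scaling is the figure quoted above; the constant factor is arbitrary and omitted.

```python
# Sketch of fifth-power scaling: cost grows roughly as N**5 for the
# method described above (constant factor ignored).

def relative_cost(scale_factor: float, power: int = 5) -> float:
    """How much more computation you need when the system grows by scale_factor."""
    return scale_factor ** power

print(relative_cost(2))    # doubling the atoms  ->      32x the work
print(relative_cost(10))   # 10x the atoms       -> 100,000x the work
```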
In some areas, the advances in the fidelity of simulations are alone enough to produce paradigm shifts in the level of understanding.
On a sour note, I do worry that some areas of science get funded to a level that does not reflect the value of the science done, simply because they can exploit large computational facilities to help. These facilities are often funded as a prestige thing, and to some extent are funded outside of the normal peer reviewed process for research funding, giving those areas that can benefit a double helping.