Computing power, and the lack thereof.

Recently, while browsing a few articles on cloning or sequencing DNA from extinct animals, I’ve seen the excuse “We don’t have the computing power” used to explain why we can’t achieve the goal faster.

What’s the deal with this? Surely if it’s a matter of processing power, the big companies ought to be able to produce something that can do it, even if it’s a building-sized monster. So are we really unable to make a machine that can do these kinds of things, or is that code for “We don’t have the funding”?

It depends.
In some cases your latter proposition holds.
There are some jobs that simply require more processing power than ANYONE could afford.
Some jobs, in fact, would require more computing hardware than there are atoms in the universe to build it from.

The current supercomputing model consists of taking a bunch of our fastest processors, hooking them together via one fast medium or another, and giving each of them a fraction of the job.
It works nicely for sequencing DNA, but other problems simply cannot be parallelized due to dependency issues. When I need to know the result of my first calculation to work on my second calculation, I can’t simply have my friend figure out the second result while I’m still working on the first one.
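To make that concrete, here’s a toy sketch in Python (the “calculation” and the numbers are made up purely for illustration): the chained version can’t hand anything off, while the sum at the end splits up just fine.

    # Toy illustration of a dependency chain: each step needs the previous
    # result, so extra workers can't start on step N early.
    def next_value(x):
        # stand-in for one expensive calculation (made up for illustration)
        return x * x % 1_000_003

    def serial_chain(seed, steps):
        x = seed
        for _ in range(steps):
            x = next_value(x)   # step i+1 can't start until step i is done
        return x

    # By contrast, a big sum is "embarrassingly parallel": the chunks don't
    # depend on each other, so each friend (or processor) can take one.
    def chunked_sum(values):
        return sum(values)

    print(serial_chain(2, 10))
    print(chunked_sum(range(100)))
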
Some very smart people have come up with some very clever solutions to this problem, including “guessing” at the possible outcomes and solving, say, 8 of the upcoming possible questions in advance. This is a neat angle to work, but the benefit from adding 1000 additional processors won’t be 1000X as much work, or even 850X; it might be closer to 50 or 60 times as much, depending on the specifics.
At a certain point, my hypothetical problem above will see absolutely no benefit from increasing the # of processors thrown at it.
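Those diminishing returns are basically Amdahl’s law: if some fraction of the job has to stay serial, the total speedup is capped no matter how many processors you pile on. A quick sketch, with a made-up 95% parallel fraction:

    # Amdahl's law: if fraction p of the work can be parallelized across
    # n processors, the best possible speedup is 1 / ((1 - p) + p / n).
    def amdahl_speedup(p, n):
        return 1.0 / ((1.0 - p) + p / n)

    # Even with 95% of the job parallelizable, 1000 processors don't buy 1000X:
    for n in (10, 100, 1000, 1_000_000):
        print(n, round(amdahl_speedup(0.95, n), 1))
    # prints roughly 6.9, 16.8, 19.6, 20.0 -- capped near 1 / (1 - p) = 20

Past that cap, the extra processors mostly sit idle.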

Once you get to that point, you’ve got two options:

  1. Wait for chipmakers like Intel/IBM/AMD to come up with a faster chip
    or
  2. Make a chip that is ONLY good for your specific question, and hope it can execute the job faster than anything the major CPU vendors can come up with. [The CIA does some of this for encryption work, for instance.] This option might be your best bet, but there are still TONS of situations where you CANNOT get a ‘solve time’ down to a year, or even a single lifetime (see the back-of-the-envelope numbers below).
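
To put a rough number on “not in a single lifetime”, take brute-forcing a 128-bit key as the classic example. The chip speed and chip count below are generous, invented figures:

    # Back-of-the-envelope: brute-forcing a 128-bit key.
    # Assume a purpose-built chip tests a trillion keys per second and we
    # build a million of them (both numbers made up, and generous).
    keys = 2 ** 128                       # size of the keyspace
    rate = 1e12 * 1e6                     # keys per second, whole machine
    seconds = keys / rate
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{years:.2e} years")           # ~1.1e13 years, about 780 times the age of the universe

Even if the hardware got another million times faster, you’d still be looking at something like ten million years.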

Interesting. I had no idea that there are mathematical problems so big that a supercomputer couldn’t run them in a lifetime!

Related: Does this mean that we are nearing the limits of computing power? If so, why? Would it be more practical to try to design physical experiments to help validate these monstrously difficult theories?

Just think about weather prediction. You can make the model arbitrarily detailed (add a term for gnats beating their wings), to the point where no conceivable computer could ever solve the problem faster than the weather itself changes. At some point you just have to make the problem simple enough to solve in a reasonable time.
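Here’s a rough sketch of how fast that blows up, using the crude rule of thumb that halving a weather model’s grid spacing costs about 8x the compute (2x in each horizontal direction plus a shorter time step; the starting resolution is made up):

    # Crude rule of thumb: halving the grid spacing of a weather model
    # roughly doubles the work in each horizontal direction and forces a
    # shorter time step -- call it ~8x the cost per halving (vertical
    # levels and everything else ignored; numbers are for illustration).
    spacing_km = 10.0         # hypothetical starting resolution
    cost = 1.0                # arbitrary units of compute
    for _ in range(13):       # 13 halvings: 10 km down to about 1.2 m
        spacing_km /= 2
        cost *= 8
        print(f"{spacing_km * 1000:10.1f} m grid -> {cost:.1e}x the compute")
    # ends around 8**13, about 5e11 times the original cost -- and that's
    # still nowhere near gnat-wing scale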

Near the limits of computing power?
We’re not done increasing computing power, although I suspect that the next step in really advancing computing will be something dramatic, like some of the prototype projects we’ve been seeing in DNA-based computing. I’m no expert on that jazz, though.

Physical experiments have their place, but also their limits:
Modeling a nuclear explosion in NYC’s Central Park, for instance, is a better idea than actually doing it, although it would make the NYPD’s traffic enforcement duties hella’ easier.
And, of course, we can’t create a second Earth to test how global warming will change life in the Arctic… although I’m afraid we used the original to perform that test.