A 300 qubit quantum computer could calculate all information in the universe

A professor from MIT mentions in that article how a 300 qubit quantum computer could calculate all information in the universe.

So how does the math of that work? I believe every 10 qubits adds roughly 3 zeros to the processing power of a computer, so 300 would be what, roughly 10^90 calculations? Is that really sufficient to calculate all the information in the universe? I thought there were about 10^80 atoms in the universe, and then there’s the question of how much information is in each atom, how much of it changes every second, etc.
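For the arithmetic itself, here’s a quick sanity check (the 10^80 atom count is the usual rough estimate, and the “3 zeros per 10 qubits” rule is really just log10(2^10) ≈ 3):

```python
from math import log10

# Every 10 qubits multiplies the number of basis states by 2^10 = 1024,
# i.e. adds about three decimal orders of magnitude.
print(log10(2**10))      # ~3.01
print(log10(2**300))     # ~90.3, so 2^300 is roughly 10^90
print(2**300 > 10**80)   # True: more basis states than the ~10^80 atom estimate
```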

What if there are two such computers in the universe? Does each encompass the other?

Clearly there is some sleight of hand in the assertion.

One might also ask how the information is retrieved, and in what form. And is that included in the calculated information?

Right off the bat, how do you store all of it? Given the rumors that the Utah data center is designed to hold >10^20 bytes (1) too lazy to Google it, and 2) this is only a rumor), that’s not even a frog fart in the universe.

For most encryption cracking purposes, however, I’d say this would be a pretty nice tool to have.

Here’s how I understand things.

You have certain kinds of problems where

  1. You know a single correct answer exists
  2. You know a way to enumerate every possible answer
  3. You can check your answer to see if it’s right

So far so good. Any problem where you can easily make a list of candidate answers and check them is solvable in principle. However, digital Turing machines can take more time and memory than the lifespan of the universe to actually reach an answer in some cases. A simple example is encryption keys - you know a key exists, you know how many bits it is, and you can check whether a candidate key is correct. If you could just try all 2^128 combinations, you’d unlock that encrypted file no problem.
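Just to put 2^128 in perspective, here’s a rough back-of-the-envelope estimate (the billion-trials-per-second rate is an arbitrary assumption, not a benchmark of any real hardware):

```python
# Rough estimate of brute-forcing a 128-bit key by exhaustive search.
SECONDS_PER_YEAR = 3600 * 24 * 365
keyspace = 2**128
rate = 1e9                      # assumed key trials per second (illustrative only)

years = keyspace / rate / SECONDS_PER_YEAR
print(f"~{years:.1e} years to exhaust the keyspace")   # on the order of 1e22 years
```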

Anyways, exactly how entangled q-bits let you try a lot of combinations at once is something I don’t fully understand, but somehow you can build computer hardware in a way that the electrons or photons representing the q-bits will have a tendency to settle into a configuration representing the answer. You then read them off and see if it’s right, and try again a few times to statistically converge on the answer.

So 300 entangled q-bits represents more possible values being tried at the same time than the number of particles in the universe. It does not mean you can build a computer that can tell you what color the rocks are 10 light years away.

[Bill Cosby]Right! What’s a qubit?[/BC]
Given Gödel’s Incompleteness Theorem, I wonder what sort of logic system could actually “map the universe”.

Define “all the information in the universe.” Are you/they positing that calculating the position and related characteristics of all the atoms in a brain would be equivalent to mapping all the information in that brain?

Or is this another “physics, dammit!” sort of postulation? :slight_smile:

It’s that game with the beak and the snake and the WA-OOO and the @#$%…heh heh, kids.

If it’s possible to build such a thing, perhaps it can answer The Last Question (i.e., “How may entropy be reversed?”).

But are quantum computers even possible? They seem to be in the realm of the science-fictional.

Even if there’s only one such computer in the universe, it is still only a subset of that universe. How could a subset of the universe map all the information in the whole universe?

Would the answer be “42”, by any chance?

That professor has talked quite a bit about building a computer that could simulate the entire universe.

Here is an interview where he discusses the idea in more detail.

I’m totally talking out of my ass here since I know nothing about quantum computing, but it sounds similar to saying that the number pi contains in its decimal expansion the code for every possible universe.

If we assume that a finite number of digits can give the location and momentum of every quark up to the limits of Heisenberg, then we can string these all together into one finite list of digits, and that finite list of digits occurs somewhere in the expansion of pi. Kind of neat in a late-night pot-induced BS session kind of way, but utterly meaningless in practice.
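As a toy illustration of the “it’s in pi somewhere” idea (this assumes pi is a normal number, which is widely believed but unproven, and the particular digit string and the use of mpmath are my own arbitrary choices):

```python
from mpmath import mp

mp.dps = 100_000                 # compute pi to about 100,000 decimal places
digits = str(mp.pi)[2:]          # drop the leading "3."

target = "1701"                  # an arbitrary short digit string to hunt for
pos = digits.find(target)
print(target, "first appears at decimal position", pos)  # -1 if not in this range
```

Short strings tend to show up early; the catch is that pinning down where a long string first appears takes roughly as many digits as the string itself, which is the sense in which it’s meaningless in practice.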

These things also tend to stem from “Well, we got a 1-bit quantum computer to work for 19 milliseconds on a slide over in the nucleonics lab, so this Qubit thing is just a matter of funding, now!”-think.

But he didn’t. He talked a lot about the amount of information, and the evolution of it, and how this could be viewed as a computation - but not as a computer - not unless you treat the universe as a whole as the computer. He wasn’t talking about simulating.

He did come up with the number 10[sup]120[/sup] as the number of bits of information in the universe. I notice that that isn’t that far off 2[sup]300[/sup]. Whether he is trying to say that the superposition of all possible 300-qubit states maps to the set of all possible bits of information in the universe, well, I don’t know. But it is a connection.

Zip files.

This is basically it. The reasoning is that quantum states are exponential in the number of systems (qubits), so writing a 300-qubit state down would need more information than the universe contains. But the question the other way around is less clear: does such a quantum state contain ‘all the information within the universe and then some’?
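Spelling that out in the standard textbook notation (nothing here is specific to the article, it’s just the usual bookkeeping), a general n-qubit pure state is

```latex
|\psi\rangle \;=\; \sum_{x \in \{0,1\}^n} \alpha_x \, |x\rangle,
\qquad \sum_{x} |\alpha_x|^2 = 1,
```

so writing the state down exactly takes 2[sup]n[/sup] complex amplitudes; for n = 300 that’s 2[sup]300[/sup] ≈ 2 × 10[sup]90[/sup] numbers.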

I think the most straightforward answer to this is: no. The reason is that to retrieve information from a quantum state, you have to perform a measurement, which yields one of a large number of possible outcomes; but then, in a sense, the information about the other outcomes is lost, i.e. re-measuring that quantum state only gives you the same answer you already know. There’s in fact a famous theorem due to Alexander Holevo that basically says that the maximum information you can retrieve from an n-qubit state is n classical bits.
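For reference, the bound in its usual textbook form (the symbols below aren’t from anything upthread) says that if classical data X is encoded into n-qubit states ρ_i sent with probabilities p_i, then any measurement outcome Y obeys

```latex
I(X:Y) \;\le\; \chi \;=\; S\!\Big(\sum_i p_i \rho_i\Big) \;-\; \sum_i p_i \, S(\rho_i) \;\le\; n,
```

where S is the von Neumann entropy, so no measurement strategy extracts more than n classical bits from n qubits.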

And those n retrievable bits have indeterminate value until you actually retrieve them? How is that going to be the solution you were seeking to any actual problem?

That’s something I don’t get about quantum computing, that isn’t clear from all the “lay” explanations I see. How do you actually get any output and know if the output is the actual answer you’re looking for unless you have additional information at your disposal by which to measure or filter the output you got?

I am reminded of the fully laced IBM card (photo) which, according to the traditional joke, contained all the information that any 80x12 bit matrix could have. All you need is a separate mask to lay over the card, with some combination of holes punched, to read out the information from the fully laced card. But then, all the information is actually in that mask, not in the laced card.

All the “lay” explanations I’ve seen of quantum computing seem like that.

Quantum computing is generally treated as a “probabilistic complexity class”. Unlike most programs people are used to, which are deterministic, quantum computers end with a measurement which collapses the superposition. For some algorithms, there are provable guarantees that given a quantum computer with a certain number of qubits*, it will return a false positive or false negative with a certain probability.

An answer can then be gotten with something known as amplification, which is running the algorithm many times and (for most classes) taking majority vote on the answer. Quantum complexity classes that are considered “feasible” are ones where given a certain number of qubits, you only need a relatively small number of runs to have an extremely high probability (usually >99%) of having the correct answer.

In fact, this is exactly how more classical randomized complexity classes work as well, but it turns out that with quantum bits, the same probability bounds are (probably, we’re not 100% sure) a lot more powerful than they are with just random 1s and 0s.

  • Related to the size of the input in most cases
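As a sketch of how amplification by majority vote works (the 2/3 per-run success probability is just the conventional textbook threshold, not a property of any real machine):

```python
from math import comb

def majority_correct(p: float, runs: int) -> float:
    """Probability that a strict majority of independent runs, each correct
    with probability p, agrees on the right answer."""
    need = runs // 2 + 1
    return sum(comb(runs, k) * p**k * (1 - p)**(runs - k)
               for k in range(need, runs + 1))

# Each run of the hypothetical algorithm is right 2/3 of the time;
# repeating and taking a majority vote pushes the overall success rate toward 1.
for runs in (1, 11, 51, 101):
    print(runs, round(majority_correct(2/3, runs), 4))
```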

I think it’s an overstatement. There’s a huge difference between “internally representing a number of states that matches the amount of information in the [observable] universe” and “modeling the universe.” Plus, as usual, they leave out the all-important “observable” qualification. The observable universe only gives a lower bound on the size; the upper bound may well be infinite.

Also, note that all these internal states would be inaccessible, except for the single resolution (collapsed) state, so what use is it?

Furthermore, modeling the universe implies we have all the pertinent initial conditions, and that it’s deterministic. Two big fails, right there!

Finally, my guess is that the 10[sup]120[/sup] is a pretty wild-assed scientific guess. How can we know how much information the universe contains when we don’t even know what most of it is? It might be a useful lower bound, at best.

Yes, they are possible and one has already been made. It was pathetic in its abilities but got us past the proof-of-concept stage. Scaling it up, however, is quite tricky (some think it might be impossible).