Will quantum computers handle fractals?

I got excited about the power/potential of quantum computers, but got a bit depressed when I learned that they don’t handle algorithmic computations better than standard computers. It seems that calculating pi wouldn’t take advantage of a QC. A QC primarily deals with selecting one result out of an incredible sea of possibilities. I render fractals into imagery (for example, these images), which takes enormous computing power. All CPUs of my 4-CPU computer are often running 24/7 rendering fractal data into imagery and video. In general, the more calculation, the higher the quality and resolution of the fractal images; e.g. rendering a breathtaking 8k video (7680 x 4320) takes about 16 times the rendering time of a 1080p video (1920 x 1080), since it has 16 times as many pixels.

I know this board isn’t a physics university, but I saw a thread on quantum computers here and you guys are pretty wise. Could anyone take a basic stab at whether quantum computers will ever be able to render fractal data (or just general algorithmic data) with more power than standard computers? Maybe some basic information about quantum computers that may apply to the situation would be helpful. I can find practically nothing of this on the web. Speculation would be great.

Thanks for any replies!!! Other people should find them useful, too.

I’m not a quantum physicist, but I am familiar with computational complexity theory, and have some knowledge about quantum complexity classes. My answer is no.

The issue you’re describing is a matter of throughput, not complexity. It is inherently parallelizable throughput, but still a throughput issue. A fractal is just a really, really long series of computations. Bigger resolutions mean more computations have to be done, but bigger resolutions can also be computed in parallel chunks by farming different parts of the image off to multiple cores (though some of this depends on how sequential your fractal generation technique is).
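To make that concrete, here is a minimal sketch (not the OP’s actual renderer; the resolution, view window, and iteration cap are all assumptions) showing why the work parallelizes so cleanly: every pixel’s escape-time loop is independent, so whole rows can be farmed out to worker processes:

```python
# Minimal escape-time Mandelbrot render, parallelized by row. Nothing here is
# specific to the OP's software; window, size, and iteration cap are made up.
from multiprocessing import Pool

WIDTH, HEIGHT, MAX_ITER = 480, 270, 500   # kept small so the sketch runs quickly

def escape_time(cx, cy, max_iter=MAX_ITER):
    """Iterate z -> z^2 + c and return how many steps until |z| exceeds 2."""
    zx = zy = 0.0
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
        if zx * zx + zy * zy > 4.0:       # the point has escaped; its fate is decided
            return i
    return max_iter                        # never escaped within the cap: treat as inside

def render_row(y):
    """One image row. Rows share no state, so any core can take any row."""
    cy = -1.2 + 2.4 * y / HEIGHT
    return [escape_time(-2.0 + 3.0 * x / WIDTH, cy) for x in range(WIDTH)]

if __name__ == "__main__":
    with Pool() as pool:                   # one worker process per CPU core by default
        rows = pool.map(render_row, range(HEIGHT))
    # 'rows' now holds an iteration count per pixel, ready to be mapped to colors.
```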

This is just a lot of computation, but nothing is really unknown. You’re never enumerating possibilities and guessing their candidacy; everything is known a priori.

What quantum computers theoretically excel at are certain variations of classes of problems known as “promise classes”. One of the classic examples is BPP, or “Bounded-error Probabilistic Polynomial time”: the set of problems for which there’s a polynomial-time randomized procedure such that, for a given question:

  1. If the answer is ‘yes’ then at least 2/3 of all possible computation paths accept.
  2. If the answer is ‘no’ then at most 1/3 of all possible computation paths accept.
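For concreteness (this example isn’t from the thread): a textbook problem in BPP is checking whether a matrix product A·B equals a given C without redoing the multiplication. Freivalds’ randomized test is wrong with probability at most 1/2 per trial, so two independent trials push the error below the 1/3 bound above. A minimal sketch:

```python
# Freivalds' randomized check of A @ B == C -- a standard BPP-style algorithm.
# Each trial costs O(n^2) instead of the O(n^3) of redoing the multiplication.
import numpy as np

def freivalds(A, B, C, trials=2):
    n = C.shape[0]
    for _ in range(trials):
        r = np.random.randint(0, 2, size=n)         # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):  # a mismatch proves A @ B != C
            return False
    return True                                      # "probably equal": error <= (1/2)^trials

A = np.random.randint(0, 10, (4, 4))
B = np.random.randint(0, 10, (4, 4))
print(freivalds(A, B, A @ B))      # True
print(freivalds(A, B, A @ B + 1))  # almost certainly False
```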

In fact, BPP is contained in BQP, which is one of the simpler quantum classes, meaning a quantum computer can solve all BPP problems in polynomial time with at most a 1-in-3 chance of error. Or, more simply, a quantum computer can solve these problems quickly.

Technically, this means a BQP machine can solve P problems, like fractal generation, as well. However, it won’t be any faster at them. This is because a computer that can solve problems in a higher complexity class generally solves less complex problems by merely simulating a less powerful machine. (There are a few exceptions here, I think.)

Basically, quantum computers aren’t inherently any “faster”; it’s just that they can solve more types of problems quickly. The reason we may describe them as “faster” is that our current computers can solve those problems too, but only by simulating a more powerful machine, which is extremely, extremely slow. Essentially, our current computers enumerate possibilities, guess, and check, whereas a more powerful machine could simply try all guesses at once (I’m being imprecise here, but that’s the gist).

In fact, almost all of these classes I’m describing are defined in terms of being able to solve a certain class of problem as fast as a current computer. BQP is effectively saying “a machine that can solve BQP can solve quantum problems as fast as a normal computer solves non-quantum problems”. It makes no claims about how fast it can solve problems that a normal computer already can solve.

(I wrote this before I saw Jragon’s post, so some of the bits are repeated, but hopefully it’ll provide a slightly different perspective)

Probably not.

Quantum computers are good at problems that can be formulated something like the following:

  1. Take an N-bit configuration as input and perform a classical computation on it
  2. Perform this computation in parallel for all N-bit configurations separately
  3. Perform a quantum operation on the bit vector so that the “successful” configurations become more probable than the others
  4. Collapse the quantum configuration and select exactly one classical configuration from it (called wavefunction collapse).
  5. Hope that you got lucky and that the configuration you selected is the one you wanted. If not, repeat the process.
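To make that recipe concrete, here is a toy classical simulation of Grover-style amplitude amplification (not from the thread; a real quantum computer would not store all 2^N amplitudes explicitly, this just mimics steps 1-5 above):

```python
# Toy simulation of steps 1-5: uniform superposition, amplify the marked
# configuration, then "collapse" by sampling one outcome.
import numpy as np

N_QUBITS = 3
N_STATES = 2 ** N_QUBITS
MARKED = 5                     # the one "successful" configuration we hope to measure

# Steps 1-2: equal amplitude on every N-bit configuration.
amps = np.full(N_STATES, 1.0 / np.sqrt(N_STATES))

# Step 3: Grover iterations boost the marked configuration's amplitude.
for _ in range(int(np.pi / 4 * np.sqrt(N_STATES))):
    amps[MARKED] *= -1.0                 # oracle: flip the sign of the good state
    amps = 2.0 * amps.mean() - amps      # diffusion: reflect every amplitude about the mean

# Steps 4-5: measure once; each outcome appears with probability |amplitude|^2.
probs = amps ** 2
outcome = np.random.choice(N_STATES, p=probs / probs.sum())
print(f"measured {outcome}; P(marked) ~ {probs[MARKED]:.3f}")   # ~0.945 for 3 qubits
```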

You’re probably aware that quantum computers are fast at factoring large numbers. In simple terms, this is because they can try all divisors simultaneously, and there’s a way to make the right divisor show up more often than the others in the result. The right answer ends up being fairly likely to be the one you see on the wavefunction collapse.
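As an aside, and to refine the “try all divisors” picture slightly: what Shor’s algorithm actually speeds up is finding the period of a^x mod N; once that period is known, the factors fall out with ordinary arithmetic. Here is a toy sketch with the quantum period-finding replaced by brute force (so it gains nothing over classical factoring; it only shows the reduction):

```python
# Classical half of Shor's algorithm. The brute-force find_period() below is
# the part a quantum computer does exponentially faster.
from math import gcd

def find_period(a, n):
    """Smallest r > 0 with a^r = 1 (mod n); assumes gcd(a, n) == 1."""
    x, r = a % n, 1
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n, a=2):
    if gcd(a, n) > 1:                    # lucky guess: a already shares a factor with n
        return gcd(a, n), n // gcd(a, n)
    r = find_period(a, n)
    if r % 2 == 1:
        return None                      # odd period: retry with a different a
    y = pow(a, r // 2, n)
    for candidate in (gcd(y - 1, n), gcd(y + 1, n)):
        if 1 < candidate < n:
            return candidate, n // candidate
    return None                          # unlucky a: retry with a different one

print(shor_classical(15))                # period of 2 mod 15 is 4 -> (3, 5)
```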

Unfortunately, none of this really applies to fractals. Computing fractals is a highly parallel operation, but you need to keep all of the answers (one for each pixel for each frame). It’s only problems where you throw away most of the intermediate data that QCs have a hope of a speedup. And even then, step 3 above is a tricky point–not all problems are amenable to it, and if you don’t do it then you effectively get random answers out.

Based on the descriptions, I’d wonder if there’d be a way to use it to determine the average color for a single pixel once the fractal detail hits the sub-pixel range?

Thanks for your replies… bit depressing, lol :slight_smile:

Stab in the dark, only thing I can think of.
What if we see the problem in terms of selecting the best fractal image out of many possible ones? You could spend 80 years rendering a 4 x 3 pixel fractal image because you want the best quality image, but there are only so many 12-pixel images. What if a QC tries out all the possible images that look generally like what we want (all sorts of previews) and picks the best? Perhaps it could use existing data about which fractal images look high-res to people to check whether the candidate fractals are high-quality or desirable?

The thing is, to me there seems to be an isomorphism between exponential fractal complexity and the selection among exponentially many possibilities. I read that selecting the best possible route out of all routes on a map is a good QC problem. You take street A, then street B or C; from B you go to D or E, or from C to F or G, etc. This is similar to a fractal, where each general area breaks into a series of sub-areas, and then each of those sub-areas breaks into another set of sub-areas. How is the problem of navigating a tree-like map of routes so different from this tree-like calculation of fractals? Another metaphor is six degrees of separation, where you’re connected to almost anybody on the planet if you consider your friends’ friends, and so on.

Here are a few links that might relate but I barely understand them.
Measurement Based Quantum Computation on Fractal Lattices
Spacetime May Have Fractal Properties on a Quantum Scale
Applied Quantum Fractal Field Theory
Natural Computing (Wikipedia Article) - lede mentions “fractal geometry” and “quantum computing”

Thanks!!!

Only if you’re using a terrible algorithm to render it, or you’re in an exceptionally pathological point in the fractal (in which case you can probably find some trick to do it by hand).

Skeptical about this. You can continue to render iterations of many fractals (e.g. the Mandelbrot set) literally forever. You can display that data as an image by using it to calculate what color each pixel should be. If you don’t have the data, how are you supposed to calculate what the pixels should look like for a high-iteration fractal?

I’m intimate with 2 basic fractal programs. Generally, as you increase the iterations, and certainly as you increase the image size, it takes more time. Days, weeks, months, or forever. I render some single images for weeks to get a high-quality fractal; these images are extraordinary, as if taken with the best possible camera or beyond. How would a fractal program know when it hits the maximum displayable quality for a certain image size? Are you saying that when the image stops improving (i.e. remains exactly the same even after days of computation), it’s time to stop, knowing that no more calculation will change the image? How can you be sure some tiny calculation long into a render won’t finally change a pixel just a bit? You could even have a scenario where a 4 x 3 image does keep changing; maybe the pixels keep adjusting themselves continuously as the fractal renders. Then you certainly wouldn’t ever be able to stop.

So if I render an 8k image at indefinitely higher iterations and keep shrinking down to 16 x 9 to get the highest-quality smaller image, I should just not bother rendering any more once the 16 x 9 image seems to stay the same for a long period?

Honestly I’m finding the idea that QCs won’t calculate fractals better very difficult to process. I’ve studied Seth Lloyd’s book Programming the Universe (Lloyd designed one of the first quantum computers), which is about information theory and how elementary particles can store astronomically more data than we currently use them for. He says the “ultimate laptop” is a computer the size and weight of a standard laptop in which every single particle is put to maximum computational use. It can perform “ten million billion billion billion billion billion” logical operations per second on “ten thousand billion billion billion” bits.

More computation power and more storage space sounds to me like just an all-around faster, better computer. So this computer would only calculate certain things better, not everything a normal computer can do? He says that the universe is a running quantum computer, equations calculating out the complexities of galaxies and human life from simple principles repeating (like a fractal?). He says as we use elementary particles more and more for quantum computations, our computations emulate reality so that there’s actually no difference between reality and quantum reality. Can’t we just build a million quantum computers in a tiny space and have them all calculate fractals and register their data on tiny particles?

Thanks for any answers. I’m sure others will find them very interesting as well. There’s almost nothing on the net on this, it seems mystical…

There’s just no way around the fact that at the end of a quantum calculation, you only get one answer. You can do a lot of stuff in parallel, but you can only peek at one of those and hope it’s the right one.

Selecting the best fractal from the set of all possible ones is in some sense possible, but you would need a quantum algorithm for taking “good looking” configurations and making them more probable, so that the final collapse selects one of them. It’s not obvious what that algorithm would look like.

One thing that quantum computers would be very good at is physical simulation. The universe is quantum and we can simulate it on a QC.

3D rendering would be amenable to QC, but only in a few particular ways. It would not enable you to render current stuff any faster. But it would allow you to get all of the detailed quantum stuff exactly right, such as the probability density of photons after they pass through a two-slit device. All the wave stuff like diffraction and interference patterns comes for free. Basically no one tries to get this right today, since it’s way too expensive, but a proper QC could do it efficiently.

But which will get faster and cheaper, QCs or classical computers? If QCs are cheaper, can’t we just build more of them and hence have more calculation?
If we can simulate the universe (which Seth Lloyd agrees with), why can’t we simulate lots of computers and let them calculate?

Seth Lloyd quote: “In order to simulate even a tiny piece of the universe, for a tiny fraction of a second, a conventional computer would need more memory space than there are atoms in the universe as a whole, and would take more time to complete the task than the current age of the universe”, as opposed to quantum computers which can do this type of thing more efficiently.
So why can’t we just simulate lots of classical computers?

Again, it’s because you only get one answer out at the end. You can simulate a bunch of classical computers. In fact, you don’t even have to go that far–QCs are not that far off from classical ones at the gate level (aside from the issue of reversibility), so you’re pretty much already simulating a bunch of classical computers.

But it’s all pointless if you need all the possible answers instead of just one of them. Likewise if there’s no way to preferentially pick a good answer with high probability.

Lloyd’s quote refers to the fact that I mentioned above: that because the universe is quantum, it requires a QC to simulate it efficiently. The domain of non-quantum problems that see a speedup on a QC is very small (or zero, if you take the approach that Jragon did above).

To clear up another possible point of confusion:

A quantum computer with N qubits and M quantum gates will always be at least as good as a classical computer with N bits and M classical gates (at the same clock rate).

It is remotely possible that one day all computers will be quantum, and when we want to solve a classical problem then we just throw away most of the potential horsepower. However, this is extremely unlikely since quantum computers are delicate creatures, and so far we don’t know how to build them with more than a handful of qubits. For the foreseeable future, classical computers will have bit/gate counts that are many orders of magnitude higher than QCs, and so it’s only on quantum problems that QCs are superior.

You keep on assuming that classical computers are really slow at generating fractals. This just plain isn’t true, and any conclusions you build from that assumption will be likewise flawed. You only need to iterate a fractal until you determine the fate of the point, and that usually doesn’t take very long at all. A thousand-iteration fractal won’t take any longer than a ten-iteration fractal, if all of the points were resolved after seven iterations. OK, so if you’re near the boundary, a few points will take longer than that… but only a few, and they probably won’t take very much longer.

This may be due to a misconception of what a computer-generated image of a fractal represents. I’m not sure what people think the pictures are of, but it’s not obvious to most people that what they represent is whether an iterated computation diverges or remains close to the origin. The simple line drawings provide that breakdown of “stays home” vs. “goes off to infinity”, while the ones with lots and lots of colors simply show you how many iterations it takes for the calculation to start to move away significantly from the origin. So when people say “well, you have to do so many iterations to get the answer”, that’s really only true for very specific points very close to the boundary of the domains; for most points all those calculations aren’t required. It’s only as you magnify and go deeper into the patterns formed by the boundaries that you need higher and higher iteration counts to determine what color to make each pixel.
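To make the “color = iteration count” idea concrete, here’s a minimal sketch of the coloring step (the palette and the cycle length are arbitrary choices, not anything from the thread):

```python
# Map an escape-time iteration count to an RGB color: points that never escape
# are conventionally drawn black, everything else is colored by how fast it fled.
import colorsys

def color_for(n_iterations, max_iter):
    if n_iterations >= max_iter:           # stayed bounded: "inside" the set
        return (0, 0, 0)
    hue = (n_iterations % 64) / 64.0       # cycle a hue palette by escape speed
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (int(r * 255), int(g * 255), int(b * 255))

print(color_for(3, 500), color_for(37, 500), color_for(500, 500))
```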

The only way I can think of to get anywhere close is the problem “there exists an iteration, k, at which this pixel is observed to diverge.” But that’s NP, and NP is not (known to be) contained in BQP, so quantum computing is irrelevant.

Actually, wait. Just spitballing:

There exists an iteration, k, at which this pixel is observed to diverge

has an interesting property: for any iteration j such that j > k, the value at j should also be observed to diverge. Since the set of possible iterations is the natural numbers, we can say that at least 2/3 of all computation paths accept when true (for any k there are infinitely many numbers greater than k and only finitely many smaller, since the smallest is 1), and likewise no more than one third accept when false (in fact, zero accept when false), so this problem is also in BPP, meaning it’s also in BQP. A BQP machine might be able to guess and check the number of iterations it takes to observe divergence.

What is a fractal? Fractions? My computer can do fractions pretty easily

http://fractalfoundation.org/resources/what-are-fractals/

… buuuuuuuuuuuuuuuuuuut

it still has to carry out those computations on that computation path. So nevermind.

The quality of a fractal image generally improves with the amount of computing power put into the render. It’s not slow or fast at any particular level; it’s just something you always want more of, like anything else that makes us want faster and faster computers. I could easily render a single image for 10 years on my computer just to get X level of quality, or spend 300 years for one second of video, or about 1.6 million years for a 90-minute movie of X quality, and so on. So when I face decisions about what quality of image/video to render, the idea of exponentially faster computers is enticing.

I was under the impression that quantum computers were simply going to be ultrafast computers. Lloyd has had me mesmerized with all this hype, talking of computers that can perform “ten million billion billion billion billion billion” logical operations per second on “ten thousand billion billion billion” bits. There’s absolutely no discussion of all the things that quantum computers CAN’T do; the book just keeps saying they’re absolutely superior and astronomically more efficient. Only on the web did I find anything about these limitations. I’m sure many other people would come away with the same confusion after reading the book and then finding out that quantum computers are only good for certain purposes.

There’s just some kind of strong isomorphism between the exponential advantage that quantum computers have over classical computers for quantum tasks, and the exponentially more ambitious things we could do with fractals if rendering speeds improved in a similarly exponential way. For instance, take that 1.6-million-year render of a 90-min 2-D movie. What if we wanted to make it 3-D? Now for every pixel into the third dimension, we’d need another 1.6 million years! For just 8,000 pixels into the Z dimension, we’d need almost 13 billion years. All this for one 90-min 3-D fractal movie.
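Just to make the arithmetic behind those figures explicit (these are the thread’s hypothetical render costs, not measurements of any real renderer):

```python
# Back-of-the-envelope scaling using the numbers quoted above.
years_per_video_second = 300                 # quoted cost of one second of 2-D video
movie_seconds = 90 * 60                      # a 90-minute movie
z_slices = 8_000                             # depth resolution for the 3-D version

movie_2d_years = years_per_video_second * movie_seconds
movie_3d_years = movie_2d_years * z_slices   # one full 2-D render per Z-slice

print(f"90-min 2-D movie: {movie_2d_years:,} years")   # 1,620,000
print(f"90-min 3-D movie: {movie_3d_years:,} years")   # 12,960,000,000 (~13 billion)
```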

The rate at which silicon chips are speeding up is decreasing drastically. Lloyd talks as if the solution to this is to switch to quantum computers. Maybe I’m just reading him way too optimistically.

But the point of constantly increasing 3-D rendering power is to emulate/duplicate/replicate the human world, which we seem obsessed with. If we are ultimately to duplicate human beings in a digital setting that’s completely isomorphic with the real world, would that be the point at which quantum computing has completely taken over? Or would we better replicate humans or intelligent life on a fusion of quantum and classical computers? Or even biocomputers?