Stephen Wolfram's Theory on the Nature of Reality

I’ve just come across an interesting article in which Stephen Wolfram puts forth a model of the universe where everything is explained as the application of simple abstract rules to abstract elements, which generates complex objects showing the properties and behavior we associate with our day-to-day reality. Stephen Wolfram is a British-American computer scientist, physicist, and businessman.

I’ve fast-read about 60% of the article and it is fascinating - a wonderful way to start another quarantine day at my residence - everybody gets to enjoy his or her ivory tower these days.

Anyway, I think other people may enjoy this article as well.

Thanks, that looks fascinating!

Yes, I have finished reading it. There is a lot of interesting stuff to ruminate on. :slight_smile:

I read the book, most of it anyway. I think the general scientific community views it as a bit whacko. It’s certainly true that cellular automata are a fascinating field of study, and they are computationally very interesting (e.g. some are Turing-complete, some exhibit interesting pseudo-chaotic phenomena, etc.). But his extension of those things to the thesis that most of nature can be explained by cellular automata is probably going too far.
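To make the "computationally interesting" claim concrete, here is a minimal sketch of an elementary cellular automaton in Python (the function names and grid width are my own choices for illustration). Each cell updates from its three-cell neighbourhood, so a whole rule fits in an 8-bit number: Rule 110 is the one proved Turing-complete, while Rule 30, run here, is the pseudo-chaotic rule Wolfram popularized.

```python
# Elementary cellular automaton: a cell's next state depends only on
# itself and its two neighbours, so an 8-bit "rule number" encodes the
# entire update table (bit index = left*4 + center*2 + right).

def step(cells, rule=30):
    """Advance one generation with wrap-around boundaries."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

width = 31
row = [0] * width
row[width // 2] = 1  # single live cell in the centre

for _ in range(16):
    print("".join("#" if c else "." for c in row))
    row = step(row)
```

Starting from a single live cell, Rule 30 quickly produces an irregular triangular pattern; the centre column is what Wolfram used as a pseudo-random sequence.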

I read that article a couple days ago, although a bit reluctantly due to Wolfram’s ego and history of making these kinds of claims (e.g. A New Kind of Science).

It’s interesting, but it all boils down to an idea that now requires a random search of a massive space to see whether a solution can be found at all. It’s a valid search method for some problems, but again there is no guarantee of finding anything in that massive space.

I couldn’t get past the first few paragraphs. Too much ego stroking for me.

I’m not excited enough to even read a summary. His A New Kind of Science really turned me off. He got quite a bit wrong about Computer Science. I assume it’s like the adage “Everything you read in the newspapers is absolutely true except for the rare story of which you happen to have firsthand knowledge.” except with regard to expertise. And that pathetic “random” number generator automaton is a complete embarrassment.

So a guy with a big ego who doesn’t understand the fields he’s writing about but it is all New and Revolutionary.

Nah.

It is misleading to say he doesn’t understand computers. He is one of the world’s wonders at programming. It is said that the average programmer produces 7 lines of commented, debugged code a day and Wolfram produces 1000. But I read A New Kind of Science and I agree it is wacky. On the thread about John Conway’s death, it came up how Conway (not Wolfram) used the Game of Life (a typical cellular automaton) to demonstrate the existence of a self-reproducing automaton. But he never claimed the world was one, only that it could have been. Although wouldn’t that make it deterministic?

Anyway, I was thinking about this just the other day and wondering how this new kind of science was faring.

A. I said Computer Science, not computer programming. It’s like confusing Mathematics with addition.

B. Anyone producing 1000 lines of code a day (which I seriously doubt he is doing, btw) is a terrible programmer. No question about it. Great programmers write small programs.

As a Computer Science prof. I regularly saw code produced by people who could grind out a lot of code. It was terrible. Just absolutely awful, unmaintainable stuff. At least 10 times longer and more complicated than needed.

Do you have any examples to back this up? He’s clearly a smart guy, but that doesn’t make him one of the world’s wonders at programming just because he’s smart.

Number of lines of code doesn’t tell you anything whatsoever about the quality of the systems created.

It is entirely possible that he is so good he can churn out high quality systems rapidly, but we would need to see the actual code to determine that.

He wrote Mathematica mostly by himself. And I do understand the difference between computer science and programming, but he was in a CS department (although he founded his own center for complex studies) at the University of Illinois.

At some level of philosophical thinking, how is it all that different from quantum physics? A whole micro-universe exists there, where everything works vastly differently from the way we conventionally see the world, and everything that we see around us is just the emergent manifestation of it.

Or string theory, for that matter.

Quantum mechanics focuses on matter and energy, their physical properties, and the laws that govern their behavior, whereas Stephen Wolfram’s theory proposes a mechanism through which existence and its development can be explained regardless of its physical contents. It is a kind of updated Pythagoreanism, in which reality, its structure, and its functioning all stem from mathematical/logical relationships and their objective progression.

Probably due to his unorthodox approach and lack of experimental results, Stephen Wolfram’s theory has been met with indifference by leading scientists so far.

However, Andrew Strominger (an American theoretical physicist who is the director of Harvard’s Center for the Fundamental Laws of Nature) thinks physics has reached a point where new concepts and instruments are necessary to solve persistent problems in this field. “Stephen is addressing these issues with a radically new approach. It has been stimulating to discuss these issues with him, and I am excited to see where it will lead,” he declares.

I’ve never used Mathematica, but everything I’ve ever seen or read about it has been very positive; it seems like he did a very nice job. But there are tons of people across the planet doing a “very nice job” of designing and creating high quality software. Of course there are 1000x more people creating crap.

When you say “one of the world’s wonders”, that seems to shift him from the 0.1% doing very good work to the 0.00001% that are doing things others are not capable of doing. It’s possible he’s in that category, but I don’t think Mathematica would be the evidence of that, or producing 1,000 lines of code in a day.
Examples of people I think do belong in that category are Hinton, Bengio, and LeCun. Neural network progress was mostly stalled because there was no efficient method of training deep networks, due to the error signal vanishing as you go back through the network. That group solved a problem that others had not been able to solve up to that point.

That’s the type of thing I think of when I hear something like “world’s wonders”.

I have definitely written 1000 lines in a day of perfectly reasonable code and, generally, my implementations are smaller than everyone else’s. (I’ve also written poor code, but usually that’s because I’m writing something for myself at home, with no need for future maintenance.)

It’s like they say with child geniuses who go to university at an early age - it doesn’t say anything about how far they actually go in the profession. Similarly, being able to code fast doesn’t say anything about how well you can code. Most developers are slow and most developers write horrible code. If you speed them up, most will still write horrible code. There’s no correlation.

I’m not Wolfram but, apparently, I’m also a fast programmer who has used cellular automata as a model for considering the universe.

Personally, I think of cellular automata as a way of thinking about c, the speed of light, and why it is limited. Ultimately, no matter how you optimize your glider, the basic rules of the program simply don’t allow information to percolate through space any faster than one cell per update. Your glider can never go faster than that; there are only less efficient gliders that move data around more slowly - and they are in the vast majority, particularly among larger structures.
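The glider speed limit can be checked directly. Below is a minimal sketch of Conway's Life on a small torus (grid size and helper names are my own choices): the rules only consult immediate neighbours, so nothing can outrun one cell per generation, and the standard glider in fact moves one cell diagonally every four generations, i.e. c/4.

```python
# Game of Life on a torus. Each update reads only the 8 adjacent cells,
# so information spreads at most one cell per generation ("c" in Life).

def life_step(grid):
    """One Game of Life generation with wrap-around boundaries."""
    h, w = len(grid), len(grid[0])
    nxt = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            n = sum(
                grid[(y + dy) % h][(x + dx) % w]
                for dy in (-1, 0, 1)
                for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
            )
            nxt[y][x] = 1 if n == 3 or (grid[y][x] and n == 2) else 0
    return nxt

def live_cells(grid):
    return {(y, x) for y, row in enumerate(grid) for x, v in enumerate(row) if v}

size = 10
grid = [[0] * size for _ in range(size)]
for y, x in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:  # standard glider
    grid[y][x] = 1

start = live_cells(grid)
for _ in range(4):
    grid = life_step(grid)

# After four generations the same five-cell shape reappears, shifted one
# cell down and one cell right: a diagonal speed of c/4, well under c.
assert live_cells(grid) == {(y + 1, x + 1) for y, x in start}
```

The final assertion is exactly the "period 4, displacement (1,1)" property of the glider; any pattern you build, glider gun included, is bounded by the one-cell-per-update horizon.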

While there’s nothing to stop the universe from arising out of a cellular automaton, I’d say that it’s more likely that someone developed a quantum simulator with the express purpose of researching quantum effects at large scales. They would have known that life was likely to spontaneously come into existence and decided that there wasn’t anything to be done about it. And it seems unlikely that you would build such a thing for shits and giggles and leave it running - implying an active intent to produce results like the universe we live in and, thereby, a custom simulator.

Of course, who knows with a superuniversal tween given free access to a gigantic quantum computer.

I read that article a few days ago, and bits of the technical papers. I think it’s a bit misleading to lump this in with ideas like ’t Hooft’s cellular automaton interpretation of quantum mechanics (which necessarily struggles with things like Bell inequality violations, having to appeal to ‘superdeterminism’, at the very least a controversial notion) or the approach of D’Ariano and collaborators.

I’d rather lump it in with what I call ‘ensemble theories of everything’, something like Tegmark’s mathematical universe, Schmidhuber’s algorithmic theories of everything, or Standish’s theory of nothing: it essentially gives a particular parametrization of ‘everything’ in terms of a broad class of computational rules, and then follows a two-pronged approach to get out the universe—first, appeal to generic properties to get the basic form of the laws, and then try to single out a given realization corresponding to our particular universe (although he does hint that this latter course might end up collapsing into the former as well, with the appeal to the use of all possible rules in parallel).

I’m of two minds regarding how wowed I should be by the successes of the approach (which I can’t claim to have exhaustively evaluated). In one sense, they may simply tell us that the gross properties of quantum mechanics and relativity are pretty generic—it might be harder to make something that looks even remotely like a universe without having these features (indeed, in a sense, one should be more surprised if that weren’t the case—for then, how did our universe get chosen to obey these particular laws?).

Furthermore, with now having to figure out which particular rule should give rise to our universe, it seems to me he’s running into a familiar problem—see also string theory’s 10^500 possible compactifications, one of which might give rise to something like our universe, or QFT’s possible gauge groups, and so on—so in the end, perhaps what he’s done is just figure out a sufficiently general language to talk about very many kinds of physical laws, with the real input now being required in the form of the specific rule.

But of course, even having different formalisms to approach familiar problems is something worthwhile, and may lead to new breakthroughs. As it is, his work is certainly impressive—not least because of the sheer volume of it—but whether the approach has any actual promise remains to be seen.

Thank you for the review, HMHW. I had no idea formal theories of everything existed - excellent reads for this year’s vacation on the terrace.