Physicist Types: Information, as a Dimension of the Universe?

If this has been discussed and my search-fu was weak, sorry.

In this thread in Cafe Society, I talk about James Gleick’s new book, The Information.

In the last post, I attempt to summarize the book and compare it to Seth Lloyd’s Programming the Universe, this way:

With that in mind:

  • What is the current state of thinking about the role of Information as part of the “fabric” of the Universe? Does “information” as a dimension of…reality? meaning? matter in the same way that energy/mass and other physical properties do?
  • Is the relationship between the physical and Information defined by what happens at the quantum level? Meaning: because matter exists as states/probabilistic fields at the quantum level, it is hard to understand and interpret using the language and tools of physics. Is it better suited to the language and tools of Information Theory?
  • Could the next Big Thing - a Grand Unified Theory or a different step beyond Relativity - come about through looking at our Universe/Reality from an Information Theory perspective?

I hope I am asking questions that make sense and are worth responding to - just trying to wrap my brain around the emergence of Information Theory as a science and way to look at the world around us.

As a grad student in cosmology, I’ll try to answer these questions to the best of my knowledge of the state of the field, vis-à-vis what is and isn’t accepted.

Information is certainly regarded as an important factor, but as far as I know no one is saying that it has tangible reality, or the status of a physical dimension (i.e. it’s not on the same level as space and time). It would be more analogous to physical properties like temperature, pressure, and entropy. Indeed, entropy can be interpreted as being an informational property.

I don’t know that there’s a hard and fast line between the laws of quantum thermodynamics and those of information theory. Some amount of information is present in any arrangement of matter and energy, and you can either encode that as a physics thing or as an informational thing. Certainly there’s been some level of borrowing from information theory in theoretical physics discussions.

It’s entirely possible. One intriguing idea that may yet become an integral part of a GUT or quantum gravity theory is the holographic principle. It states, loosely, that all the events and information happening in a four-dimensional spacetime can be encoded in a three-dimensional surface. Look how important the informational concept of encoding is to this theory. There’s another instance of it with black holes: that all the entropy of a black hole’s volume is contained in the event horizon, its outer area. Same idea. Short answer: maybe.

Or, perhaps, that four-dimensional spacetime is merely a computationally-efficient way for our brains to model three-dimensional reality … .

Not being a physicist type, I cannot help noticing that each real or alleged “next big thing” is ever more rarefied, harder to understand, and of little relevance to anybody other than astronomers - apart from some hard-to-understand, if very useful, gizmos.

The big thing of Galileo made him a popular icon because of its sheer simplicity. Newton’s was widely appreciated if not necessarily understood by everybody. (Maxwell gets skipped :) ). Einstein’s made him a popular icon precisely because of its incomprehensibility (it also gives us GPS). Whoever came up with quantum theory does not even register in popular culture, although the insights achieved proved essential in electronics. String theory seems to be not just hard to understand but also, at present, non-verifiable, so no gizmos are coming out of there.

See the trend? Even if some incredible insight is achieved through name-your-theory, it most likely will not be appreciated by anybody other than a closely knit coterie of grad students and their professors. For the rest of us, it will always stay on the “yeah, eggheads are talking incomprehensible equations” level of understanding. And if no worm hole drive or other such miracles originate from any of this, most people will not even bother noticing if those equations are “string theory” or “information theory” or what have you.

The first two examples that pop into my head of technology based on information-theoretic insights are cell phones and hard drives. I think that’s a pretty good reason to regard it as practical.

I believe the OP was talking about “big theory of everything” type of insight into the physical universe.

By way of analogy, maybe tomorrow somebody comes up with a way to use a rat’s brain to build a “storage device” of incredible power and usefulness. But that wouldn’t mean that a “big theory of everything” has just originated from neuroscience. It would just mean that neuroscience gave us a very clever, useful hack to play with.

Quoth code_grey:

There wasn’t any single mind behind quantum mechanics, but rather a whole slew of individuals who did their part. The biggest contributions were made by Schrödinger and Heisenberg, but there were also significant advances made by de Broglie, Planck, Fermi, Einstein, Dirac, Bose, and many others.

Would inaccurate information be a dimension?

The TV series Breaking Bad features a character known as Heisenberg.

Only Einstein registers with popular culture in a meaningful way. And Einstein is not known for quantum mechanics. Schrödinger is a blip only because he had the foresight to use a cat in his thought experiment. If he had used something less cute, only people with an interest in science would have heard of him.

OK, I’ll take your word on that. Most of those are known to certain subcultures of which I’m a member, but I don’t know too much about what constitutes mainstream popular culture.

Oh, and I knew there was a name I was missing. Add Pauli to that list, too. Yeah, folks probably haven’t heard of him, either, it just seems unjust to leave him off of it.

Born, Bohm and Wigner too, and not forgetting von Neumann, who in many ways was to quantum mechanics what Einstein was to special relativity (i.e. he gave quantum mechanics an axiomatic basis).

interestingly, I just realized that in my survey of the “next big things” I completely missed the work of 18th century French mathematicians on mechanics. Which, according to my very hazy understanding, may be much more relevant to the actual practice of modern mechanical and structural engineering than Newton himself.

But that just goes to show, once again, that after a certain point “next big things” are hype driven and do not have profound significance to the lay reader. While everybody may be talking about Einstein or about somebody making big claims for information theory, do they care about the breakthroughs of Laplace, Lagrange et al?

Overall it just feels like “keeping up with celebrity news, the college grads edition” rather than a serious effort to understand non-trivial new aspects of the universe. Then again, if those new and non-trivial aspects of the universe are not really understandable to the vast majority of college grads (code_grey among them), then what’s wrong with that.

…and yet “e=mc^2” is held up for its elegance and simplicity.

I am not sure why you are holding humanity to task for the explanations. In science, a “theory” is the set of rules that produces replicable conclusions that best fit reality. These theories evolve as our ability to perceive reality, and to conceive of rules worth testing against it, evolves.

…and the accumulation of innovation and applicability that Humanity has acquired over the years has been driven by the pursuit of deeper understanding of the universe around us - even if the usefulness wasn’t initially apparent. But figuring this stuff out? Please, there would be tons of applications to computer architectures, energy-creation approaches and many other fields.

Thanks to **Spatial Rift** and others speaking to the OP. I am familiar with holographic theory - that is the one that potentially offers a new theory of what gravity is, correct?

In reading The Information, I love the exploration of set theory and randomness. In both cases, it is shown how Info Theory helped to provide clarity on the fundamental incomprehensibility of our system. Gödel’s proof about the paradox of sets and the difficulty of assessing randomness have far-ranging implications in a variety of fields - I enjoy pondering the philosophical implications.

There’s of course different people saying different things from different viewpoints, but one that has a certain appeal to me is not to view information as a ‘part’ of the ‘fabric’ of the universe in the same way that time, space, energy or mass may be a part of it, but rather to view reality as a way of looking at or representing information, in some sense. I think it was Gregory Bateson who said that information is ‘any difference that makes a difference’. To make this somewhat more exact, think about a ball that can be either red or green. You can use this to ‘represent’ one bit of information, the same way one might use the digits ‘1’ and ‘0’. Or, take two cars of absolutely identical make and model, one of which is red, one of which is green. Again, with either of these cars, you can represent the value of one bit of information.

Now, think about an object, that can come both in different shapes as well as different colors: say, it can be a cube or a ball; and it can be green or red. This can be used to encode two bits of information, with the states ‘cube, green’, ‘cube, red’, ‘ball, green’ and ‘ball, red’ corresponding to the bit strings 00, 01, 10, 11, for instance. So, any property that can be used to uniquely differentiate between two objects otherwise identical in all other respects can be used to represent one bit of information. This is, in fact, how one can encode information in the physical world.
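The two-bit encoding described above can be sketched in a few lines of Python (the shape/color names are just the example’s, nothing standard):

```python
# Map each distinguishable property to one bit, per the cube/ball example.
SHAPE_BIT = {"cube": "0", "ball": "1"}
COLOR_BIT = {"green": "0", "red": "1"}

def encode(shape, color):
    """Map an object's distinguishing properties to a two-bit string."""
    return SHAPE_BIT[shape] + COLOR_BIT[color]

def decode(bits):
    """Recover the object's properties from its two-bit description."""
    shape = {v: k for k, v in SHAPE_BIT.items()}[bits[0]]
    color = {v: k for k, v in COLOR_BIT.items()}[bits[1]]
    return shape, color

# The four possible objects exhaust the four two-bit strings:
# cube/green -> "00", cube/red -> "01", ball/green -> "10", ball/red -> "11"
```

Nothing deep here, of course - it just makes explicit that any property distinguishing otherwise-identical objects serves equally well as a bit.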

But this reasoning can be inverted: in a sense, one may say that it’s the difference in information content that distinguishes one object from the other, making thus information, rather than physical properties, the primary quantity. That is, it’s not that the difference between the red ball and the green cube can be used to store information, but that this difference results from the difference in information content. In this way, then, one can view every object, and by extension, the whole of the universe, as a certain representation of information, that just replaces more familiar bit strings, ‘001011101001…’, with a list of properties, ‘red, cubic, heavy, big…’.

Up to this point, we haven’t really gotten anything out of the whole deal – it essentially just involves a change of viewpoint; both descriptions are dual to each other. However, information is something we understand very well, from a theoretical standpoint; so, thinking about the world in this kind of framework means that we get to direct this theoretical machinery towards the understanding of physical reality, which may conceivably yield new insights.

There’s been several interesting developments in this field relatively recently, but this post is already growing fast, and I’ve found that when I get too excited and type out these long posts, the thread tends not to continue on for too long… However, one thing that has received some exposure in the scientific press is Erik Verlinde’s derivation of (Newtonian) gravity from black hole thermodynamics (which, as has been mentioned above, includes the essential notion that the information ‘stored’ within a black hole is proportional to the area of its horizon). There’s actually an older argument by Ted Jacobson that does something similar for general relativity, but Verlinde’s framework explicitly contains the notion that gravity may be an entropic force, where entropy is an essentially information-theoretic concept – roughly, one could say that gravity is a direct consequence of the way information is represented in the universe. (If you’re interested in reading more about this idea, Johannes Koelman has discussed it extensively on his blog.)

I should caution, however, that despite having generated quite some buzz, this isn’t a mainstream scientific explanation – most researchers still take gravity to be a fundamental force.

EDIT: Ah, I missed your last post; you seem to be already familiar with entropic gravity (however, not everybody would agree that holography implies a new theory of gravity…).

All very cool, **Half Man Half Wit** - your framing of “what if you started with Information first” is exactly the concept I was trying to get at with my OP. I am both asking “is that ‘valid’?” and “is that a truly new perspective, similar, say, to the Relativistic perspective introduced by Poincaré and others in the late 1800’s and culminating in Einstein’s theory - i.e., a new approach to a Big Question that may yield a whole new theoretical model?”

And your exploration of the holographic stuff - yep, that’s what I have read about and was wondering if that might be an example of a new model influenced or even driven by Information Theory.

Oh yeah - Shannon, with his Info Theory work, is a, if not the, central figure in Gleick’s new book.

Thank you.

One ‘big question’ that information theoretic approaches may shed new light on is the question of why there is anything at all. There’s an interesting book by Russell K. Standish, called ‘Theory of Nothing’, in which he claims that ‘something is just the inside view of nothing’. That sounds like something more fit for eastern philosophy than western science at first brush, but it can be given a rigorous meaning, in terms of information content.

First of all, the information content of nothing is, rather trivially, zero. The information content of something can be measured by how much it can be compressed, using a universal Turing machine (I reckon your reading has introduced you to the concept; if not, just think of it as a sort of formalized definition for a computer, because that’s what it is). In other words, the information content of something – say, a picture, or a bit string – is measured by the smallest amount of data needed to reproduce it faithfully. A highly redundant bit string, such as ‘01010101010101…’ can be very highly compressed, to something like ‘n times 01’, and so can a ‘boring’ picture, like, say, one which just displays a perfectly uniform, red wall.
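Kolmogorov complexity itself is uncomputable, but an off-the-shelf compressor gives a rough, hands-on feel for the idea; a sketch in Python (zlib here is just a crude stand-in, not the real measure):

```python
import os
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data: a crude, computable stand-in
    for the (uncomputable) algorithmic information content."""
    return len(zlib.compress(data, level=9))

redundant = b"01" * 5000   # the highly redundant '010101...' string from above
noise = os.urandom(10000)  # random bytes: almost surely incompressible

# The redundant string shrinks to a tiny fraction of its 10,000 bytes,
# while the random bytes barely compress at all.
```

Running this, `compressed_size(redundant)` comes out a couple of orders of magnitude smaller than `compressed_size(noise)`, mirroring the ‘boring picture vs. random data’ contrast above.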

One thus obtains a measure for complexity (commonly called algorithmic or Kolmogorov complexity). Now, one might think that in some everyday sense more complicated objects also always have a higher complexity in the algorithmic sense, and that, in particular, ‘everything’, formalized in a suitable way, should have the highest complexity possible, as everything includes, well, everything, including every thing that has a high complexity, and should thus have a higher complexity than all of those things.

However, it turns out that in fact, everything has exactly the same complexity, and hence information content, as nothing: zero. You could give me nothing, and from this, I could generate a description of everything. It’s simple: you can create a Turing machine (i.e. write a program) that, requiring no input, outputs all possible binary strings in sequence – 0, 1, 00, 01, 10, 11, 000, 001… – and hence, all information.
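That zero-input program is short enough to write out; here a Python generator stands in for the Turing machine:

```python
from itertools import count, product

def all_binary_strings():
    """Yield every finite binary string, in order of length and then
    lexicographically -- a zero-input program whose output eventually
    contains any finite piece of information."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

gen = all_binary_strings()
first_six = [next(gen) for _ in range(6)]  # ['0', '1', '00', '01', '10', '11']
```

The program itself carries (essentially) no information - a handful of lines - yet any given string, Shakespeare included once suitably encoded, appears somewhere in its output.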

Standish uses Borges’ infinite ‘library of Babel’ as an illustrative example: in it, all possible books – in fact, all possible permutations of a certain amount of letters – are stored. So you’ve got everything in there, from the complete works of Shakespeare to a volume containing just the letter q, 150,000 times.

This library, too, has zero information content. But it also contains the work of Shakespeare, so in this toy world, Shakespeare came out of nothing – just as, if our universe is really just information at bottom, it ‘came out of nothing’, in the sense that it is part of all possible ‘bit-strings’ – representations of information – that are derivable from ‘nothing’ in the sense of ‘containing no information at all’.

So, from an information theoretic viewpoint, that something could come from nothing is not as paradoxical as it is commonly taken to be – as in fact, nothing contains (the collection of) all possible somethings, in the same sense that a coded message contains the plain-English message.

This may be taken to be rather more philosophy than science, and I wouldn’t quibble with that characterization; however, once you gun for the big questions, the boundaries always get a little fuzzy.

---

As for more ‘mundane’ physical approaches, one might point out that essentially all of special relativity can be derived from the postulate that ‘information can’t be transferred faster than light’, and that quantum theory, or at least large patches of it, can be derived from assuming some variation of a ‘principle of finite information’, i.e. that every quantum system only contains a finite amount of information (this is for instance the starting point of Rovelli’s ‘relational interpretation’ and reconstruction of quantum mechanics). The viewpoint has certainly been expressed that both theories are really theories about information underneath.

Regarding the validity of this approach to physics, there is an issue here: conventional physics relies on continuum quantities, which aren’t computable, while many information-based approaches require computable physics. In fact, this is probably the greatest obstacle for ‘digital physics’. A simple example is the apparent loss of symmetry: anything information-based will typically have something like a shortest length scale, or a shortest time step. However, in conventional physics, these quantities are typically assumed to be continuous, which produces some nice features. There’s a famous result, called Noether’s theorem, which associates a conserved quantity with every continuous symmetry – concretely, the fact that physics ‘stays the same’ whether it’s observed now or any amount of time (in particular, any arbitrarily small amount of time) later leads to the conservation of energy; similarly, spatial translation symmetry leads to the conservation of momentum. A shortest time step, or a smallest length, breaks these symmetries, which may seem too high a price.

There are ways around this: some parts of Noether’s theorem can be salvaged even in a discrete world (there’s been some recent work on this – here’s a blog link), and there’s also a sense in which information can be continuous – take sampling: what you do is take a finite number of discrete points at which you note the amplitudes, i.e. you turn the analog signal into a digital, discrete one. However, provided certain conditions are met, the analog signal can be arbitrarily well re-created (given enough processing power) thanks to the Shannon-Nyquist theorem. Spacetime may be ‘both discrete and continuous’ in a similar sense.
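The sampling point can be sketched numerically: below, a band-limited sine is rebuilt between its sample points by Whittaker-Shannon (sinc) interpolation. The frequencies and rates are purely illustrative:

```python
import math

def sinc(x):
    """Normalized sinc, the interpolation kernel of the sampling theorem."""
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, fs, t):
    """Whittaker-Shannon interpolation: estimate the analog signal at an
    arbitrary time t from samples taken at rate fs."""
    return sum(s * sinc(t * fs - n) for n, s in enumerate(samples))

f = 3.0     # signal frequency (Hz), well below the Nyquist limit fs/2
fs = 100.0  # sampling rate (Hz)
samples = [math.sin(2 * math.pi * f * n / fs) for n in range(1000)]

# Evaluate between sample points, away from the window edges: the
# reconstruction closely matches the original continuous signal.
t = 5.0037
error = abs(reconstruct(samples, fs, t) - math.sin(2 * math.pi * f * t))
```

With a finite window the match isn’t exact (the true theorem assumes infinitely many samples), but the error here is tiny - the ‘discrete’ record recovers the ‘continuous’ signal to high accuracy, which is the sense in which a discrete spacetime could still look continuous.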

The good thing about these speculations is that, contrary to many other ‘new physics’ proposals (string theory, I’m looking in your direction), they may not be very far from being testable, or may be testable already – the folks at Fermilab are currently busy building the Holometer, which may actually have a shot at detecting the underlying discreteness of the universe.

That’s what I am talking about. The “philosophical” stuff above your dividing line. I understand Turing Machines (CompSci major back in the day) and the concept of information content. That was a major topic in Gleick’s book, or at least the later chapter on randomness in information - information is defined by the observer; a random number may not be compressible and therefore, in a definitional sense, contains more “information” - and yet not be useful. I think what you are saying vis-à-vis Standish’s book is that, on a purely definitional level, something is not distinguishable from nothing - you can’t “scan for something” based on patterns or randomness in an information stream. Hence, we come back to the observer’s role.

Hmm, I should check out that book. Is it readable/reasonably well-written for a reasonably-well-educated civilian?

And thank you for the other portion of your thread about underlying discreteness - I kinda almost get that. At some level, it feels like we are still trying to figure out if there is ether…

You shouldn’t have a problem with it, I think. For what little math is used, there’s a primer in the back, where you can refresh anything you might have gotten rusty on, but most of it is written on a popular, but not dumbed down level. There’s quite a few nuggets in that book, from quantum immortality to doomsday arguments to how one can use anthropic reasoning to derive an argument that shows that ants aren’t conscious… So if that kinda stuff is up your alley, I heartily recommend it.

Cool - I just clicked on “please make this available for the Kindle” button and put this on my shopping cart list…