First of all, ‘nothing’ is very hard to talk about. That is because, whenever we use sentences like ‘x is green’, we are implicitly making an existence claim, i.e. we’re saying ‘there is something that is x, and that something is green’. But when we say things about nothing, we’re automatically not talking about nothing—when we say, for instance, ‘nothing is very hard to talk about’, we’re saying ‘there is something that is nothing, and that something is very hard to talk about’. But of course, that’s nonsense: nothing is not something; something is the opposite of nothing!
Now, in contemporary popular discourse on the matter, people often define something that they want to call ‘nothing’—the vacuum of space, for example (as in ‘particle creation from nothing’ in quantum field theory), or a closed pseudo-Riemannian manifold of zero radius (as in certain universe-creation scenarios), or whatever else. But all these things simply aren’t nothing (no matter what Lawrence Krauss tries to sell you). They’re somethings. How do I know? Because they have properties—and if it has properties, it’s not nothing, because properties are what make something a something. It’s not nothing if it’s, say, green—only things can be green. Likewise, it’s not nothing if it has a metric signature, and so on.
But then, how do we talk about nothing? I think there is a possibility, which can be brought out by algorithmic information theory (AIT). AIT essentially concerns the question of how much information a given thing contains, and approaches it by quantifying how difficult that thing is to describe—more formally, how long the shortest computer program is that produces it (its so-called Kolmogorov complexity). There are some technical niceties to that definition, which I won’t bore you with; for the moment, let’s just take it as a well-defined quantity that, for a given object, tells us how much information is contained in its description.
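To get a rough feel for the idea, here is a toy sketch (not AIT proper: true Kolmogorov complexity is uncomputable, so an off-the-shelf compressor serves as a crude stand-in for ‘length of the shortest description’; the function name and the example strings are just my choices for illustration):

```python
import os
import zlib

def rough_description_length(data: bytes) -> int:
    # Compressed size: a crude, computable upper bound on the
    # (uncomputable) length of the shortest program producing `data`.
    return len(zlib.compress(data, level=9))

regular = b"ab" * 50_000        # 100,000 bytes of obvious structure
random_ = os.urandom(100_000)   # 100,000 bytes with (almost surely) no structure

print(rough_description_length(regular))   # a few hundred bytes
print(rough_description_length(random_))   # roughly 100,000 bytes: incompressible
```

The regular string compresses to a tiny fraction of its length; the random one essentially doesn’t, and that is the sense in which it contains more information.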
Now, let’s suppose that we weren’t interested in nothing, but instead, in everything. What’s the information content of everything? Well, as it turns out, there is a very small program that outputs every possible object—even very complex ones, which on their own can only be output by a very long program.
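Here is what such a program could look like in miniature (a sketch of mine, over a two-letter alphabet so the output stays readable): a few lines that, run long enough, emit every finite string, including ones whose shortest standalone description is astronomically long.

```python
from itertools import count, product
from typing import Iterator

ALPHABET = "ab"  # any finite alphabet works; two symbols keep the output readable

def everything() -> Iterator[str]:
    # Enumerate every finite string over ALPHABET, shortest first.
    # The program is a handful of lines, yet its output eventually
    # contains every object expressible over the alphabet.
    for length in count(0):
        for chars in product(ALPHABET, repeat=length):
            yield "".join(chars)

# First few outputs: '', 'a', 'b', 'aa', 'ab', 'ba', 'bb', 'aaa', ...
```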
By analogy, consider a library that contains ‘every book with exactly 100,000 characters’. That single sentence contains all the information needed to, should you choose to do so, create the whole library, book for book. Nevertheless, there are books in it that can’t be described by anything appreciably shorter than the book itself—just consider that, for every abbreviated description you give, there will in general be several books fitting it that differ only in some minute detail. So somehow, collecting all those high-information things yields a very low-information thing.
Now suppose you increase the number of allowed characters without bound; still, the description of the whole library will be a very short one, and in fact, the ratio between the length of that description and the size of the library will tend to zero; thus, the ‘library of every book’ will contain asymptotically zero information.
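Spelled out (with n the book length and k the alphabet size, my notation, not anything official): the library of every book of exactly n characters holds k^n books, i.e. n·k^n characters in total, while its description needs only a fixed phrase plus the digits of n, so

```latex
\frac{\text{description length}}{\text{library size}}
  \approx \frac{c + \log_{10} n}{n \, k^{n}}
  \xrightarrow{\; n \to \infty \;} 0
```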
Now, a fundamental result in AIT is that every set contains essentially as much information as its complement, that is, the set of things not in the original set (their description lengths differ by at most a small constant). For example, if you have a collection of apples that are either red or green, you can describe a subset either as ‘all the green apples’ or as ‘all the not-red apples’. If you know how to pick out the red apples, you also know how to pick out the green ones, and vice versa.
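In program form, the intuition is simply that negating a membership test costs only a constant amount of extra description (a toy sketch of mine, restricted to sets given by decidable membership tests):

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def complement_test(member: Callable[[T], bool]) -> Callable[[T], bool]:
    # A membership test for the complement of a set costs only a few
    # extra symbols on top of the test for the set itself: wrap it in "not".
    return lambda x: not member(x)

is_red = lambda apple: apple == "red"   # picks out the red apples
is_green = complement_test(is_red)      # thereby picks out the green ones

print(is_green("green"))  # True
print(is_green("red"))    # False
```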
But now we know that everything has asymptotically no information, and we have a specification of everything; but then, we also have a specification of the complement of everything. And what’s the complement of everything? Right, nothing. So for the description of nothing, we only need the description of everything. Both contain exactly the same information.
This has an odd consequence for the issue of creation from nothing: namely, creating something from nothing is the same as deleting something from everything. To illustrate, assume you take one book out of the library of every book. How do you describe the new library, sans the book you took out? Well, you take your original specification of the library and describe which book you took out, i.e. you say ‘the library of everything, minus book x’. The question is, how do you describe book x?
Well, every consistent way of labelling the books in the infinite library will have entries that, on average, contain just as much information as the books they label. That means that, just from knowing a book’s entry in the library catalogue, you can recreate the entire book. Thus, the description of the library without some book is at the same time a description of the book itself, just as the description of the whole library is also a description of nothing. So describing the whole library with one book missing is the same as describing only the book—deleting the book from the everything library is the same as creating it from nothing.
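To see why a catalogue entry pins a book down completely, here is a miniature version (my own toy setup, with five-character ‘books’ over a 27-symbol alphabet so it actually runs): the catalogue number and the book’s text are interconvertible, so each carries exactly the information of the other.

```python
ALPHABET = "abcdefghijklmnopqrstuvwxyz "
BOOK_LENGTH = 5  # a tiny stand-in for 100,000 characters

def book_from_index(i: int) -> str:
    # The i-th book in a fixed (lexicographic) catalogue order.
    k, chars = len(ALPHABET), []
    for _ in range(BOOK_LENGTH):
        i, r = divmod(i, k)
        chars.append(ALPHABET[r])
    return "".join(reversed(chars))

def index_of_book(book: str) -> int:
    # Inverse of book_from_index: the catalogue entry determines the book exactly.
    i = 0
    for ch in book:
        i = i * len(ALPHABET) + ALPHABET.index(ch)
    return i

assert book_from_index(index_of_book("hello")) == "hello"
```

So ‘the everything library minus book x’ has to carry x’s catalogue number, and that number is, informationally, the book.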
In fact, whenever you somehow partition the everything library in two, you essentially create information: the everything library on its own does not contain any information, but each part of it does. Thus, if you throw away one part, or otherwise make it inaccessible, what you’re left with will have some nonzero information content, which has been created from no information at all.
So, to sum up, talking about nothing is the same as talking about everything; deleting something from everything is the same as creating something from nothing (informationally speaking). The question then is, if I keep on writing like this, and my post ultimately becomes infinitely long—will I have said everything, or nothing? 