Physicists please help explain: jewel-like geometric object that dramatically simplifies calculation

I was reading this article and was blown away…I think…maybe.

It sounds remarkable…maybe even game-changing…but it is frankly a bit beyond me to really understand what is happening here.

Can anyone explain it in layman’s terms? Is this a big deal or much ado about not much?

Here is a snippet from the article:

This sort of thing happens from time to time… About fifteen years ago, someone figured out the surface of a many-dimensional ellipsoid that simplified certain kinds of calculations. It really cut down on computing time for some classes of problems.

(I Googled but couldn’t find a reference.)

It’s really nifty, but I fear that the math may be a zillion miles over our heads (most of us, anyway). Like with the Higgs boson, I’ll read the popularized science articles and hope to get the faintest glimmer. It definitely sounds wonderful!

This isn’t my field, and I can’t pretend to be up to speed with these developments; but from what little I know, there’s been a kind of revolution regarding the way computations are done in particle physics, and the ‘amplituhedron’ seems to be the culmination, or at least the latest development, in this direction.

So, basically, the classical way you compute the amplitudes—which give you the probabilities—for certain scattering processes, i.e. for certain ways particles interact with one another, is by means of Feynman diagrams, which are often read as symbolic representations of one particular kind of interaction (sometimes they are even presented as depicting ‘what really goes on’, but this is grossly misleading). However, in fact, Feynman diagrams are graphical representations of the terms in a certain kind of approximation to the ‘full’ expression (which in general can’t be calculated exactly), a so-called perturbation series. (The often-mentioned ‘virtual particles’ are things that show up in this approximation, by the way.)

The problem is that the number of these terms grows very quickly with the ‘order’ of the approximation; that is, the better you want your approximation to be, the more diagrams you have to calculate, which in general becomes intractable quickly. The same goes for the number of interacting particles you consider. But in recent years, a couple of techniques have emerged that allow one to dramatically shorten these calculations, in a sense doing many Feynman diagrams at once; one is, for example, the so-called ‘unitarity method’ due to Zvi Bern et al., and another, which I believe is a more direct precursor to this amplituhedron thing, is the BCFW recursion relations.
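To get a feel for how fast the bookkeeping blows up, here is a toy count (a sketch, not a real diagram enumerator): the number of ways to fully pair up the field operators in a Wick contraction, which underlies the diagram expansion, grows as a double factorial, and the actual diagram count at a given order is worse still.

```python
def wick_pairings(n_points: int) -> int:
    """Ways to fully pair n_points operators: (n_points - 1)!! for even n_points."""
    if n_points % 2:
        return 0  # an odd number of operators cannot be fully paired
    count = 1
    for k in range(1, n_points, 2):
        count *= k
    return count

# The double-factorial growth is already punishing:
for n in (4, 8, 12, 16):
    print(n, wick_pairings(n))  # 3, 105, 10395, 2027025
```

Real Feynman diagram counts grow even faster once vertices and loop structure are included, which is why techniques that effectively sum many diagrams at once matter so much.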

The big question is, of course, is this just a nifty trick, or a hint at something more fundamental? I think at the moment, nobody really knows; there is an interesting relation between the calculations using the unitarity method, which are fundamentally quantum field theoretical, and certain calculations in general relativity (it’s claimed that gravity can be seen as a ‘double copy’ of a QFT), which nobody knows how to bring into the mold of quantum field theory, but I’m unsure to what extent this has yet been explored.

What gives me pause regarding this amplituhedron thing is that it’s (apparently) letting go of unitarity (I don’t care so much about locality). First of all, I’m not sure how this is possible; after all, unitarity is built into the very foundations of quantum theory. That you should need to let it go to calculate scattering amplitudes seems strange, at least conceptually. Second, I just can’t see how to make sense of a theory in which unitarity doesn’t hold—probabilities that don’t add up to one, i.e. that don’t satisfy the constraint that something always happens, just don’t really make sense to me. For some reason, however, this kind of speculation has become much more common in recent years; when Hawking proposed that unitarity might be violated in black hole evaporation, it was still sufficiently anathema that people immediately thought the idea couldn’t be right (which led to much brilliant theoretical work in order to smooth out this flaw—in my view, and likewise Hawking’s, who famously conceded a bet on the matter, successfully).

One thing I don’t know anything about, and would like a passing particle physicist (Pasta?) to comment on, is in what sense, if at all, this kind of thing can be considered analogous to the old S-matrix programme; there seems to be at least a conceptual similarity, both in that the emphasis is placed on the calculation of scattering amplitudes above all else, and in a similar rejection of the space-time picture (though of course the S-matrix was strictly unitary).

This quote is worth highlighting:

As a word of caution, however, one should add that as of now, this whole amplituhedron thing only works for a very specific theory—so-called (four-dimensional) N=4 super-Yang-Mills theory in the planar limit—which we know doesn’t describe the world, but rather is just a toy model. The hope is that the procedure might be applicable also to realistic quantum field theories, maybe even string theory, but as of now, I don’t think anything’s known about whether this works (I also have the sneaking suspicion that it’s one of those really nice things that only work if you have supersymmetry, which however seems quite elusive in light of the LHC data, if it’s there at all).

According to the article, if space-time as well as both locality and CFD are irrelevant, I’m not sure how that matters.

Well, it matters insofar as, up to now, you can only use the technique to calculate scattering amplitudes for particles that don’t exist. And while there’s justified hope that the discovered techniques may generalize towards more realistic models, so far that’s an open question, and certainly a long way down the road (it might, for instance, work only in supersymmetric theories, but reality may turn out to be stubbornly nonsupersymmetric).

(I’m not sure what you mean by CFD, though.)

editing

I was doing battle with the edit window.

Counter-factual definiteness—I wasn’t paying attention to the context, but I get the impression from the sidebar that’s the idea they were trying to convey. Anyway, what I’ve read elsewhere a few times is that when you start with this sort of ‘dimensionless’ conceptual base, concepts like space, time, etc. tend to flow naturally from first principles.

As far as I can see, that’s not relevant at all. What’s being computed are scattering amplitudes—basically, you start with a collection of particles and let them interact to form another collection of particles; from the description of this process, you get the probability (the squared magnitude of the amplitude) for this process to happen. Nothing else is present in this formulation; however, in order for those amplitudes to describe quantum mechanical processes, they need to fulfill certain constraints that, among other things, lead to locality and unitarity.
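The unitarity constraint is easy to illustrate with a toy two-channel example (a sketch, nothing specific to the amplituhedron): if the scattering matrix is unitary, the squared magnitudes of the amplitudes into all possible final states sum to one.

```python
import math

# A toy 2x2 unitary "S-matrix": a rotation mixing two scattering channels.
theta = 0.3
S = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

# Amplitudes for an initial state (1, 0) to end up in each final channel
# are just the first column of S.
amplitudes = [row[0] for row in S]
probabilities = [abs(a) ** 2 for a in amplitudes]

print(sum(probabilities))  # 1.0 (up to rounding): "something always happens"
```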

The remarkable thing now is that Arkani-Hamed et al. have found a mathematical object, a kind of higher-dimensional polytope, whose volume corresponds to such amplitudes, which automatically satisfy the requirements, and thus, can be interpreted as amplitudes for particle scattering processes in four-dimensional spacetime. Without putting spacetime, unitarity, locality etc. in, they thus get out something that corresponds to processes calculated in a quantum field theory where those things are explicit prerequisites.
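Here’s the ‘volume of a geometric object’ idea in miniature (purely illustrative; the actual amplituhedron lives in a far more exotic space, and its ‘volume’ is a canonical form rather than a literal Euclidean volume): a polytope’s volume is a single number determined entirely by its vertex data.

```python
def triangle_area(a, b, c):
    """Area of the triangle with vertices a, b, c (the simplest 2D polytope),
    via the cross product of two edge vectors."""
    return abs((b[0] - a[0]) * (c[1] - a[1])
               - (b[1] - a[1]) * (c[0] - a[0])) / 2

print(triangle_area((0, 0), (1, 0), (0, 1)))  # 0.5
```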

The obvious questions now are why this is possible, i.e. whether there’s something more fundamental that allows one to recover quantum field theory as some kind of limiting case, and whether this generalizes, i.e. whether it’s possible to do the same kind of thing for other, realistic quantum field theories.

My conclusion was based partly on this:

You don’t think that sounds like CFD? It’s not the same thing but that was the closest analog I had when I saw locality.

Well, you can say that the Feynman diagrams show “what’s really going on”, but you have to be careful when you do so to make clear that it’s not any single diagram that’s “real”, but the superposition of all of them at once, with various weightings.

That said, though, using the Feynman diagrams to get actual numbers is a real chore, and in some cases can’t be done at all. If this new technique can work in the cases where Feynman’s methods are impractical or impossible (in particular, for the low-energy color force), that would be very big news.

This bears repeating. The article (which I thought was generally quite good) somewhat buries this point.

For those who are curious, here’s a good article on what N=4 supersymmetric Yang-Mills is.

It sounds like the next step is probably to see if something similar can work in more realistic (but still not real-world) theories like N=8 supergravity. Applying this to a theory that actually describes the real world is a long way off, if it ever comes.

I was puzzled by that, too. Maybe they don’t really mean that unitarity is violated, but it’s just not manifest in the way the calculation is done? If it agrees with the result you’d get from the Feynman diagrams, doesn’t it have to be unitary?

Why do I get the feeling that we’re in a World of Warcraft simulation and are beginning to work out the source code? I anticipate any day now someone will work out that gravity doesn’t actually work according to the inverse square of distance, but by a 0x5f3759df-like approximation. :smiley:
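For anyone wondering, 0x5f3759df is the magic constant from the famous Quake III ‘fast inverse square root’ hack; here it is transplanted into Python via bit-reinterpretation (just for fun, and obviously nothing to do with gravity):

```python
import struct

def fast_inv_sqrt(x: float) -> float:
    """Approximate 1/sqrt(x) with the 0x5f3759df bit trick plus one
    Newton-Raphson step (accurate to roughly 0.2%)."""
    # Reinterpret the 32-bit float's bits as an unsigned integer
    i = struct.unpack('<I', struct.pack('<f', x))[0]
    i = 0x5f3759df - (i >> 1)              # magic initial guess
    y = struct.unpack('<f', struct.pack('<I', i))[0]
    return y * (1.5 - 0.5 * x * y * y)     # one refinement step

print(fast_inv_sqrt(4.0))  # roughly 0.499, vs. the exact 0.5
```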

I’ve since gotten around to watching Arkani-Hamed’s talk at SUSY 2013 (which was pretty good, he explained the concepts really well), and in the present version, everything is indeed perfectly unitary (as it must be, in order to agree with SYM predictions, as you note), but the unitarity isn’t put in as a starting principle. He then draws a comparison to reformulating Newtonian mechanics using action principles, in which (he claims) determinism likewise isn’t put in upfront, but comes out afterwards; this was then later deformed to yield quantum theory, in which determinism is no longer present (again, his reasoning, as best as I am able to reproduce it). So his intuition is that this present reformulation of SYM may similarly be deformed to give a more general theory, in which things like unitarity and locality may no longer be given. (Personally, I’m not exactly floored by this argumentation, I must admit.)

I look forward to the Lego amplituhedron.

Getting to this thread late, so not much to add to what’s been said, particularly by Half Man Half Wit and tim314. It’ll be interesting to see where this goes, but there’s a long road of work ahead (as the authors readily state). It could be months, years, or decades before we know if this avenue of essentially mathematical work leads to anything fundamental or applicable to nature or if it will prove to be a curiosity made possible by the chosen toy model.

There are definitely some strong parallels, but as you have since seen in Arkani-Hamed’s SUSY 2013 talk, the motivations seem somewhat different. He and his colleagues seem to be looking to generalize for generalization’s sake (or, if we eliminate possibly retrospective narratives, they probably saw some interesting patterns in their work with twistors and wanted to see where they led), as opposed to the S-matrix programme’s goals of dodging renormalization and figuring out how to deal with non-perturbative situations.

But if we leave aside this (to me still somewhat spurious) generalization angle, and this ends up being a way to calculate scattering amplitudes and nothing more, then boy have we arrived at this point via the winding road… From trying to calculate scattering amplitudes from first principles, to hadronic string theory, to bosonic strings after QCD was developed, leading to conformal field theory, then to supersymmetry, supersymmetric Yang-Mills, the twistor string, and back to calculating scattering amplitudes from first principles, and that’s leaving out most of the steps in between. What’s next, re-current algebra (actually that name is so good I’d love for someone to write a paper on it)? The amplituhedron-bootstrap?

Is it just me, or does this thread remind anyone else of a mathematical “Mornington Crescent”? :slight_smile: Or a Chris Morris satire?

Yeah, right. About as convincing as “Shatner’s Bassoon” (NSFW).