Math Dopers... Quaternions? (a primer, please?)

Oh, just one more interesting tidbit: The reason that the sedenions (how's that for a word of the day?) don't have multiplicative inverses in general is that they're not nicely normed. IOW, for anything up to and including the octonions, you have that aa* = (k, 0) for some constant k, and you can define 1/a as k/a*. In the sedenions, that fails.

It would be interesting to see what aa* is equal to in general, but that seems like a lot of work.

I hope you’ll forgive a slight hijack from a math dummy:

One thing I've found rather mystifying about quantum field theory is the whole idea that spin-1/2 particles (common fermions) have to be rotated 720° for them to look the same (the interference properties of an electron, for instance, are only restored after two spatial rotations, suggesting rather bizarre topological properties). I'm not familiar enough with the math, beyond scratching the surface of matrix mechanics, but it seems to me that maybe these spin properties are described by the equivalent of a quaternion, such that if it has certain elements, it describes a fermion or a boson, and that operations representing rotations yield different results depending on how those elements look after such operations.

You mean define 1/a as a*/k, right?
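For the quaternion case, here's a quick numerical sanity check of the a*/k formula. This is just a sketch (it assumes Python with numpy, and the helper names are mine):

[code]
import numpy as np

def quat_mul(q, r):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def quat_conj(q):
    return q * np.array([1, -1, -1, -1])

a = np.random.normal(size=4)          # a random quaternion
k = quat_mul(a, quat_conj(a))[0]      # aa* = (k, 0, 0, 0), with k = |a|^2
a_inv = quat_conj(a) / k              # 1/a = a*/k
print(quat_mul(a, a_inv))             # prints approximately (1, 0, 0, 0)
[/code]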

Actually, I don’t think it’s that bad.

If (a,b)* = (a*, -b) and (a,b)(c,d) = (ac - db*, a*d + cb)

then wouldn’t it just be:

(a,b)(a,b)* = (a,b)(a*,-b) = (aa* - (-b)b*, a*(-b)+a*(b))

Note that when multiplication between a and b is associative, the second component is zero. This can be proven as follows:
a*(-b) + a*(b) = a*(-b + b) = a*(0) = 0

However, without associativity this proof breaks down, and in fact it is no longer the case that the second component must be zero.

You probably don’t want the general quaternions info then. I suggest reading section 5.25 of the comp.graphics.algorithms FAQ. It’s currently managed by Dave Eberly, and is a PDF file: link.

Well, the quaternions can be represented as i times the Pauli matrices, so it makes sense that they’d be related to spin.
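For anyone who wants to see that concretely, here's a small sanity check. It's just a sketch in Python with numpy, using one common sign and ordering convention (i goes to -i·σ1, j to -i·σ2, k to -i·σ3; other conventions flip signs or reorder the sigmas):

[code]
import numpy as np

# Pauli matrices
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)

# One common identification of the quaternion units with 2x2 matrices
qi, qj, qk = -1j * s1, -1j * s2, -1j * s3
I = np.eye(2)

# The defining relations i^2 = j^2 = k^2 = ijk = -1 and ij = k all hold
assert np.allclose(qi @ qi, -I)
assert np.allclose(qj @ qj, -I)
assert np.allclose(qk @ qk, -I)
assert np.allclose(qi @ qj @ qk, -I)
assert np.allclose(qi @ qj, qk)
print("the matrices -i*sigma satisfy the quaternion relations")
[/code]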

Yeah.

That doesn’t follow. There might be an alternative proof that works without associativity. You need a specific counterexample, like the one given at the end of this page.

I think the real question here is whether an area of mathematics exists which Mathochist isn’t an expert on :eek:

Actually, I take that back. I used the distributive law, not associativity. I’m not quite sure why this wouldn’t work for non-associative F.

And at any rate, as ultrafilter said, that doesn’t prove there’s not some completely different way to show that (a,b)(a,b)* = (k, 0). (I didn’t really try to prove this – I was actually trying to answer the question about what the general form of (a,b)(a,b)* would be, and then just sort of tacked that stuff on. I’m not sure, other than by counter-example, how you’d prove that they’re no longer nicely normed when you use Cayley-Dickson on a non-associative F, but I’d like to see it if someone knows how.)
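If it helps, here's a rough way to poke at this numerically rather than on paper. It's only a sketch (Python with numpy assumed): it just implements the doubling formulas quoted above and prints aa* for random elements at each level, so you can inspect the non-real part directly. It obviously isn't a proof either way.

[code]
import numpy as np

# Cayley-Dickson doubling with the conventions used above:
#   (a, b)* = (a*, -b)   and   (a, b)(c, d) = (ac - db*, a*d + cb)
# Elements are numpy arrays whose length is a power of two; reals have length 1.

def conj(x):
    if len(x) == 1:
        return x.copy()
    a, b = np.split(x, 2)
    return np.concatenate([conj(a), -b])

def mul(x, y):
    if len(x) == 1:
        return x * y
    a, b = np.split(x, 2)
    c, d = np.split(y, 2)
    return np.concatenate([mul(a, c) - mul(d, conj(b)),
                           mul(conj(a), d) + mul(c, b)])

# Print aa* for a random element of each algebra
# (reals, complexes, quaternions, octonions, sedenions)
for dim in (1, 2, 4, 8, 16):
    a = np.random.normal(size=dim)
    p = mul(a, conj(a))
    nonreal = np.max(np.abs(p[1:])) if dim > 1 else 0.0
    print(f"dim {dim}: real part {p[0]:.6f}, largest non-real component {nonreal:.2e}")
[/code]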

As I posted elsewhere in this very thread, spin-1/2 particles are described by spin-1/2 representations of SO(3,1) (the Lorentz group), which contains as a subgroup the group of rotations in 3-dimensional space: SO(3). The unit quaternions form the group SU(2), which is a double cover of SO(3), and all representations of SO(3) are induced from representations of SU(2). If the kernel of the representation contains -1 (in SU(2)), the induced representation of SO(3) has integral spin; otherwise it has half-integral spin.

More topologically, SO(3) is SU(2)/{1,-1}. SU(2) is a 3-sphere, which is simply connected. SO(3) is not simply connected, which means that there is a closed loop which cannot be smoothly contracted to a point. However, going around that loop twice gives a loop which can be contracted to a point (since the simply-connected cover is a double cover). In terms of actual rotations, turning around once may not "be the same" as staying still, but turning around twice must be.
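Here's a tiny numerical illustration of that (just a sketch, assuming Python with numpy): rotate about the z-axis by 2*pi and by 4*pi, once as an ordinary SO(3) rotation and once as the corresponding SU(2) (spin-1/2) element. The SO(3) matrix comes back to the identity both times; the SU(2) element picks up a factor of -1 after one full turn and only returns to +1 after two.

[code]
import numpy as np

def su2_rotation_z(theta):
    # Spin-1/2 rotation about z: exp(-i * theta * sigma_z / 2)
    return np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])

def so3_rotation_z(theta):
    # Ordinary 3D rotation about z by angle theta
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

for turns in (1, 2):
    theta = 2 * np.pi * turns
    U, R = su2_rotation_z(theta), so3_rotation_z(theta)
    print(f"{turns} full turn(s):",
          "SO(3) = identity?", np.allclose(R, np.eye(3)),
          "| SU(2) = +identity?", np.allclose(U, np.eye(2)),
          "| SU(2) = -identity?", np.allclose(U, -np.eye(2)))
[/code]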

Analysis. I got through my quals and never looked back. It’s all about bounding this function up or that one down or the other one away from zero or some-such. Frankly, I’m just not that into bondage.

I recall that quaternions were discovered/invented by the 19th-century mathematician Sir William Rowan Hamilton. His colleagues didn't know what to use them for, so they were generally ignored (until recently).
Just goes to show you…something formerly thought useless may well turn out to be something very useful!
Now, if we could only apply them to analysis of the stock market! :smack:

Bleh. Sorry Mathochist, I've been outed as a thread-skimmer once again.

I have to admit, I sometimes find the usefulness of mathematics positively creepy. Especially when the application is to something sufficiently weird as to be completely unintuitive. Reading these links about quaternions and Pauli matrices, which admittedly I barely grasp (due in no small part to the notational formalities and jargon), tends to make my head hurt; but what I can apprehend with more concerted effort and time occasionally makes the hairs on my neck stand up. Think about it: Hamilton derives this incredibly (to me, anyway) abstract mathematical concept, it lies around for a hundred years or whatever, and then somebody else finds out it can be used to describe bizarre phenomena like spin, perhaps as precisely as can be measured. I think spin is bizarre, anyway. You have to rotate some things twice to make them return to their original state? Whah? And this is precisely calculable, perhaps even predictable, using abstruse mathematical equations that were invented long before the concept was ever imagined? Maybe there's nothing deeper going on here than more graspable associations, like drug metabolism and exponential functions, or parametric relationships of a similar kind that amount essentially to "curve fitting".

FYI, I just read a review on /. of Roger Penrose’s “The Road to Reality: A Complete Guide to the Laws of the Universe.”

here

My next stop after SDMB is the local library. Scifi can wait, I’m going to read a little sci-fact. Or theory, at least.

Saw the same thing when I was over there, Jake (/.). I'm off to amazon now. Thanks everybody. I printed out a lot of the info in the links, and am trying to grasp it now. I haven't decided which book to get yet. I'm determined to understand this (all the other programmers fled when they saw this code. Being the old guy, I couldn't run as fast. So it's mine.)

Y'all are amazing. (This board is amazing.)

Thanks again…

Y’all let us know (maybe in Cafe) how that book is. I’ve thought of getting it, but it intimidates the hell out of me. What Is Quantum Mechanics? (A Physics Adventure) made me weep hard enough at my own stupidity; so do I need Penrose to drive the point home? Or can a math idiot like me get some deep knowledge out of it? I’d be interested to hear people’s impressions.

Sorry again for hijacking.

I thought I would hijack this thread since there seem to be quite a few quaternion experts here. I'm currently trying to implement a simple hill-climbing algorithm that iteratively searches for a minimum of a function f(R), where R represents a rotation.

What I need to figure out is a way to generate rotations that are “close” to any given R. Would quaternions be the best representation for this? And how would I do it?
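One common approach (this is only a sketch in Python with numpy, and the helper names are mine, so treat it as a starting point rather than the definitive way): keep the current rotation as a unit quaternion, and generate a neighbor by multiplying it by a small random rotation, i.e., a unit quaternion with a random axis and a small random angle, then renormalizing.

[code]
import numpy as np

def quat_mul(q, r):
    # Hamilton product of quaternions stored as (w, x, y, z)
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def small_random_rotation(max_angle):
    # Unit quaternion for a rotation by a small random angle about a random axis
    axis = np.random.normal(size=3)
    axis /= np.linalg.norm(axis)
    angle = np.random.uniform(0.0, max_angle)
    return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

def neighbor(q, max_angle=0.05):
    # Perturb the current rotation q (a unit quaternion) by a nearby rotation,
    # then renormalize to guard against floating-point drift
    p = quat_mul(small_random_rotation(max_angle), q)
    return p / np.linalg.norm(p)
[/code]

Then the hill climber just keeps the neighbor whenever f at the new rotation is lower, and you can shrink max_angle as the search converges.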

This “lies around for a hundred years” and “generally ignored (until recently)” stuff is just nonsense.

Hamilton published in 1843. The concept was picked up almost immediately by one James Clerk Maxwell, who used it in his 1873 Treatise on Electricity and Magnetism. This was the thing that led, inexorably, to Einstein's special relativity, quantum mechanics, and god-knows-what-else. The whole vector calculus thing took off from this point.

A bit of history here:

http://www.hypercomplex.com/education/intro_tutorial/nabla.html

A bit more of the history:

http://history.hyperjeff.net/hypercomplex.html

http://www-groups.dcs.st-and.ac.uk/~history/HistTopics/Abstract_linear_spaces.html

Hamilton may have invented the nabla operator to deal with quaternions, but, as the article you cited states, it was redefined in 1881 in terms of Cartesian unit vectors, not quaternions. Also, the article says that in Maxwell's Treatise on Electricity and Magnetism, he "writes everything in coordinate notation, and only then are some of the main final results repeated again in quaternions." So while it's certainly not true to say quaternions were ignored, it seems like they were more of an inspiration for the vector calculus used in Maxwell's electromagnetism and other areas of physics, rather than an essential part of those theories. At any rate, it doesn't seem absurd to say that it was a convenient twist of fate that the mathematics for dealing with quantum mechanical spin happened to be invented many decades before any physical quantity that behaves like spin was known to exist.