The Straight Dope

  #1  
Old 05-02-2012, 12:56 AM
Jragon
Member
 
Join Date: Mar 2007
Location: Miskatonic University
Posts: 8,967
Explain/confirm why this matrix is possible (Cross product determinant method)

My question isn't why a specific formula for the cross product works (the math is easy enough), but why, mathematically, part of it is a valid construction. Over a few years, the professors and grad students I've asked, from several different fields, have more or less all said the same thing: "When I asked the same question in your shoes, I got the same answer: I don't know."

So here's the question. The cross product

a x b = c

can be validly computed using the following method:

Code:
       [ i   j   k   ]
c = det[ a_x a_y a_z ]
       [ b_x b_y b_z ]
Where i,j,k are the standard orthonormal basis vectors in R3: [1,0,0]; [0,1,0]; and [0,0,1]
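As a quick sanity check (just a sketch using numpy, with made-up example vectors), the determinant recipe can be carried out numerically by expanding along the top row by cofactors, and it agrees with numpy's built-in cross product:

```python
import numpy as np

def cross_by_cofactors(a, b):
    """Expand the formal determinant along its top row: the i/j/k
    coefficients are the signed 2x2 minors of the bottom two rows."""
    M = np.array([a, b], dtype=float)                # the two scalar rows
    c = np.empty(3)
    for col in range(3):
        minor = np.delete(M, col, axis=1)            # 2x2 minor for this column
        c[col] = (-1) ** col * np.linalg.det(minor)  # cofactor signs: +, -, +
    return c

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(cross_by_cofactors(a, b))   # same as np.cross(a, b)
```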

But here's the rub: the elements of the matrix don't all live in the same space. You can't have elements in R3 AND elements in R1 in one matrix (which is the part most instructors/professors also acknowledge is confusing), and if you expanded the basis vectors into their components you'd have a 5x3 matrix... which you can't take the determinant of.

So how is this possible? Through my own thinking, I've come across one potential explanation. I'm not too familiar with quaternions, having only used them briefly (and outside of any formal setting at that), but if you take a_x ... b_z as a_x * 1 ... b_z * 1 (where 1 is the quaternion [1,0,0,0]), and take i, j, k in R3 as their corresponding quaternions (which I understand is done without hesitation rather often), then you have a square matrix with every element in H. Since multiplication by 1 is the identity, you get, for instance, (a_y * 1) * (b_z * 1) * i = a_y * b_z * i, which makes the arithmetic consistent as well.

So is that it? Do physics and basic linear algebra or graphics courses just not want to introduce quaternion algebra to state a simple cross product formula? Or can you really have a matrix over a mixed space, meaning I was lied to by my linear algebra course? The reason I doubt my, er... "discovery" is that four very well-educated people who teach how to compute and use the cross product never even hinted that this might be the reason. But then, I guess they may never have bothered thinking about it much.

Unfortunately, every source I find that defines the cross product more or less just mentions the determinant method using the matrix I gave without explaining it. Wikipedia mentions taking the determinant of the "formal matrix" and while I can't really find many sources for the term "formal matrix" I infer that "formal matrix" just adds an extra word onto "matrix" so we don't confuse it with... I dunno, a Keanu Reeves/Wachowski Brothers film I guess? (Though I'm not sure they're successful at disambiguation. There were a lot of suits in that movie, that's pretty formal).
  #2  
Old 05-02-2012, 01:58 AM
Chronos
Charter Member
 
Join Date: Jan 2000
Location: The Land of Cleves
Posts: 55,274
I think it's just a quirk that this pseudo-method happens to give the right answer. There's nothing analogous for any other number of dimensions.
__________________
Time travels in divers paces with divers persons.
--As You Like It, III:ii:328
  #3  
Old 05-02-2012, 02:00 AM
Jragon
Member
 
Join Date: Mar 2007
Location: Miskatonic University
Posts: 8,967
Quote:
Originally Posted by Chronos View Post
I think it's just a quirk that this pseudo-method happens to give the right answer. There's nothing analogous for any other number of dimensions.
I thought of that, but isn't the cross-product technically only defined in 3-dimensional space? So it only really needs to work in R3. Of course, I'm perfectly willing to accept the "it's a fake method with no basis in reality that works because it gives the right number" answer too.

Last edited by Jragon; 05-02-2012 at 02:03 AM..
  #4  
Old 05-02-2012, 06:01 AM
Hari Seldon
Guest
 
Join Date: Mar 2002
Yeah, think of it as more like a mnemonic device for the cross product. There is nothing like it in any other dimension. Think about it. The characteristic property of the cross product of two vectors is that it lies in the unique direction that is perpendicular to each of the factors. In no other dimension is it possible to have such a unique direction.

Here is another (random and maybe incorrect) thought. Suppose you take three vectors, say u, v, w, in a four-dimensional space. Form the matrix

Code:
[ i  j  k  l  ]
[ u1 u2 u3 u4 ]
[ v1 v2 v3 v4 ]
[ w1 w2 w3 w4 ]

(here i, j, k, l will be an orthonormal basis for the 4-dimensional space) and expand the determinant in the usual way, temporarily ignoring that i,j,k,l are vectors and I bet you get a vector in the unique direction perpendicular to u,v,w. BTW, whether it points "up" or "down" will depend on whether the orientation of u,v,w is positive or negative.

Just speculating.
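The speculation above can actually be checked numerically. Here's a sketch (numpy, made-up example vectors): each component of d is the signed 3x3 minor obtained by deleting that column from the stacked rows u, v, w, which is exactly what expanding the formal 4x4 determinant along its top row produces. The result is perpendicular to all three inputs:

```python
import numpy as np

def generalized_cross(*vectors):
    """(n-1)-ary analogue of the cross product in n dimensions, via the
    formal determinant with basis vectors in the top row."""
    M = np.array(vectors, dtype=float)           # (n-1) x n
    n = M.shape[1]
    assert M.shape[0] == n - 1, "need n-1 vectors in n dimensions"
    return np.array([(-1) ** col * np.linalg.det(np.delete(M, col, axis=1))
                     for col in range(n)])

u = np.array([1.0, 0.0, 2.0, -1.0])
v = np.array([0.0, 3.0, 1.0, 1.0])
w = np.array([2.0, -1.0, 0.0, 4.0])
d = generalized_cross(u, v, w)
print(d @ u, d @ v, d @ w)   # all three dot products are 0 (up to rounding)
```

The dot products vanish for the same reason the 3D trick works: d . u is the expansion of a determinant with a repeated row.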
  #5  
Old 05-02-2012, 06:19 AM
Typo Knig
Guest
 
Join Date: Dec 2005
You can define a cross product in higher dimensions (vector times vector gives a new vector) using the Levi-Civita symbol. http://en.m.wikipedia.org/wiki/Levi-Civita_symbol#_

My recollection was that the determinant trick doesn't work in more than 3 dimensions, but Wikipedia says the Levi-Civita symbol can be used to calculate the determinant in N dimensions.

In any case you won't use anything with higher dimensions in college intro physics, so don't sweat it. Should you run into higher-dimensional calculations later in life, you'll be using matrices.
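For the curious, the Levi-Civita definition from the link is easy to play with numerically. A sketch (numpy, building the 3D symbol by hand): c_i = epsilon_ijk a_j b_k, summed over the repeated indices:

```python
import numpy as np

# Build the rank-3 Levi-Civita symbol: +1 on even permutations of (0,1,2),
# -1 on odd permutations, 0 whenever an index repeats.
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0
    eps[i, k, j] = -1.0

def levi_civita_cross(a, b):
    """c_i = epsilon_ijk * a_j * b_k (summation over j and k)."""
    return np.einsum('ijk,j,k->i', eps, a, b)

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
print(levi_civita_cross(a, b))   # agrees with np.cross(a, b)
```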
  #6  
Old 05-02-2012, 06:22 AM
Jragon
Member
 
Join Date: Mar 2007
Location: Miskatonic University
Posts: 8,967
Quote:
Originally Posted by Typo Knig View Post
You can define cross product in higher dimensions (vector times vector gives new vector), using the Levi-Civita symbol. http://en.m.wikipedia.org/wiki/Levi-Civita_symbol#_

My recollection was that the determinant trick doesn't work in more than 3 dimensions, but Wikipedia says the Levi-Civita symbol can be used to calculate the determinant in N dimensions.

In any case you won't use anything with higher dimensions in college intro physics, so don't sweat it. Should you run into higher-dimensional calculations later in life, you'll be using matrices.
Actually, in this case the question most recently came up while doing computer graphics -- granted, that's still pretty much attached at the hip to R3.

ETA: We don't even go over matrices in intro physics (or bases, or the meaning of a determinant; we don't even use row/column vectors, opting instead for long-winded component forms or magnitude-plus-angle-off-axis forms). The determinant gets about 13 seconds, just to compute the cross product, and even then we're encouraged to use the right-hand rule instead. If physics alone had brought this up, and I'd never taken linear algebra or done graphics, I'd probably never have noticed.

Last edited by Jragon; 05-02-2012 at 06:25 AM..
  #7  
Old 05-02-2012, 07:08 AM
Thudlow Boink
Charter Member
 
Join Date: May 2000
Location: Springfield, IL
Posts: 18,188
Quote:
Originally Posted by Jragon View Post
Wikipedia mentions taking the determinant of the "formal matrix" and while I can't really find many sources for the term "formal matrix" I infer that "formal matrix" just adds an extra word onto "matrix" so we don't confuse it with... I dunno, a Keanu Reeves/Wachowski Brothers film I guess?
I would interpret "formal matrix" in this context as meaning that it's in the form of a matrix, even though it may not be a "proper" matrix according to the definition.
  #8  
Old 05-02-2012, 07:09 AM
ZenBeam
Charter Member
 
Join Date: Oct 1999
Location: I'm right here!
Posts: 8,636
I always thought the analogue of a cross product in higher dimensions would take more input vectors, one less than the number of dimensions. So in four dimensions, you would have a X b X c = d. I'd expect d to be perpendicular to a, b, and c, although I never verified that. You'd probably have to define it using a form equivalent to the determinant of a 4 X 4 matrix (n X n for higher dimensions).

I think this even works in 2-D, where there's only one input vector: X a = b.

Pretty sure none of this works in anything other than Cartesian coordinates (ETA: I don't think...).

Last edited by ZenBeam; 05-02-2012 at 07:12 AM..
  #9  
Old 05-02-2012, 10:31 AM
Pasta
Charter Member
 
Join Date: Sep 1999
Posts: 1,721
Getting the cross product expression correct involves handling the cyclic coordinate changes correctly. So, if the x-directed component of A x B in Cartesian coordinates is (AyBz - AzBy), then you can get all the other components by shifting the letters up one (x-->y, y-->z, z-->x) or up two (x-->z, y-->x, z-->y).

It turns out that a 3x3 matrix's determinant has a similar cyclic property in its terms (as do many other constructions in mathematics), and since the cyclic part is what people often mess up, casting the cross product into the matrix formalism is helpful. But there isn't anything meaningful about the matrix otherwise. (Its inverse, for instance, would be gibberish.)
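The cyclic rule can be made completely mechanical. A small sketch (plain Python, my own index convention: 0, 1, 2 for x, y, z): write down one component and get the others by rotating indices mod 3:

```python
def cross(a, b):
    # Component i follows the cyclic rule:
    # c_i = a_{i+1} * b_{i+2} - a_{i+2} * b_{i+1}, indices taken mod 3.
    return [a[(i + 1) % 3] * b[(i + 2) % 3] - a[(i + 2) % 3] * b[(i + 1) % 3]
            for i in range(3)]

print(cross([1, 2, 3], [4, 5, 6]))   # [-3, 6, -3]
```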
  #10  
Old 05-02-2012, 11:38 AM
Indistinguishable
Guest
 
Join Date: Apr 2007
Quote:
Originally Posted by Chronos View Post
I think it's just a quirk that this pseudo-method happens to give the right answer. There's nothing analogous for any other number of dimensions.
Oh, but there is...

Exactly as ZenBeam notes and Hari Seldon speculates, in N dimensions (for N > 0), we can define an (N - 1)-ary antisymmetric multilinear operator whose output is always perpendicular to each of its inputs. And this can always be done by the analogue of the formula given in the OP. The reason being as follows:

The determinant is the essentially unique N-ary antisymmetric multilinear operator (essentially unique because the Nth exterior power of an N-dimensional space is (N choose N) = 1-dimensional).

Since the determinant is multilinear, we can think of it as a map from V^N [equally, V^(N - 1) x V] to R, where x denotes tensor product and ^ denotes repeated tensor product. This can be "curried" into a map from V^(N - 1) to (V -> R) [where -> denotes the linear function space]. The dot product yields an isomorphism between (V -> R) and V, and pulling our curried determinant through this gives a multilinear map from V^(N - 1) to V.

In other words, we may define an antisymmetric multilinear operator L by the definition L(v_2, ..., v_n) . v_1 = Det(v_1, v_2, ..., v_n). Since the determinant becomes 0 whenever a vector is repeated, we have that the output of L is always perpendicular to each of its arguments. In the same way, as the determinant measures N-dimensional volume, we can see that the magnitude of the output of L is, in fact, the (N - 1)-dimensional volume of the parallelepiped formed by its arguments.

And how does this definition correspond to our matrix "trick"? Well, if you want to see a vector analyzed as the sum of its coordinate components, you can formally "dot" it with a "vector" specifying the basis: if e_i = the basis vector in the i-th direction, then the component of v in the i-th direction is (v's i-th coordinate) * e_i, and v is the sum of all these components, so, formally, v = v . <e_1, e_2, ..., e_n>.

Accordingly, L(v_2, ..., v_n) = L(v_2, ..., v_n) . <e_1, e_2, ..., e_n> = Det(<e_1, e_2, ..., e_n>, v_2, ..., v_n). This is the matrix trick.
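The magnitude claim above is easy to verify numerically too. A sketch (numpy, made-up vectors in R4): compute L via the formal determinant, then compare its length to the (N - 1)-dimensional volume of the parallelepiped, which is the square root of the Gram determinant det(M M^T):

```python
import numpy as np

def L(*vectors):
    """Formal-determinant operator: component i is the signed minor
    obtained by deleting column i of the stacked input rows."""
    M = np.array(vectors, dtype=float)
    return np.array([(-1) ** i * np.linalg.det(np.delete(M, i, axis=1))
                     for i in range(M.shape[1])])

v2 = np.array([1.0, 2.0, 0.0, 1.0])
v3 = np.array([0.0, 1.0, 1.0, -1.0])
v4 = np.array([3.0, 0.0, 2.0, 2.0])

out = L(v2, v3, v4)
M = np.array([v2, v3, v4])
volume = np.sqrt(np.linalg.det(M @ M.T))   # 3-volume via the Gram determinant
print(np.linalg.norm(out), volume)         # the two numbers match
```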

Last edited by Indistinguishable; 05-02-2012 at 11:42 AM..
  #11  
Old 05-02-2012, 11:51 AM
Indistinguishable
Guest
 
Join Date: Apr 2007
In other words, think of a vector whose components are vectors as a 2-tensor. The vector <e_1, ..., e_n> made of basis vectors is the 2-tensor corresponding to the identity matrix. Putting this in place of an ordinary vector v in any linear formula Fv will yield the tensor corresponding to F. And, swapping covariance/contravariance in one's interpretation of the coordinates of a tensor amounts to pulling through the dot product isomorphism.

Thus, det(<e_1, ..., e_n>, v_2, ..., v_n) outputs the coordinates of the tensor corresponding to the linear map Fv = det(<e_1, ..., e_n>, v_2, ..., v_n), which are also the coordinates of the vector G such that G . v = det(v, v_2, ..., v_n) for all v, which is precisely the vector which is perpendicular to each of v_2, ..., v_n and has magnitude/orientation given by the oriented volume of the parallelepiped produced by v_2, ..., v_n.

Last edited by Indistinguishable; 05-02-2012 at 11:52 AM..
  #12  
Old 05-02-2012, 12:34 PM
Indistinguishable
Guest
 
Join Date: Apr 2007
Quote:
Originally Posted by Indistinguishable View Post
Thus, det(<e_1, ..., e_n>, v_2, ..., v_n) outputs the coordinates of the tensor corresponding to the linear map Fv = det(v, v_2, ..., v_n), which are also ...
Correction in bold. (I should write these things, walk away, come back, re-read them, and only then post them... Another example of this is that I think the last post is much clearer than the one before it [save for the typo, of course], and ought to have replaced it)

Last edited by Indistinguishable; 05-02-2012 at 12:36 PM..
  #13  
Old 05-02-2012, 01:52 PM
Pleonast
Charter Member
 
Join Date: Aug 1999
Location: Los Obamangeles
Posts: 5,283
Quote:
Originally Posted by Hari Seldon View Post
The characteristic property of the cross product of two vectors is that it lies in the unique direction that is perpendicular to each of the factors.
Not quite unique, as anyone who's watched wildly gesticulating physics students knows.
  #14  
Old 05-02-2012, 02:00 PM
Saint Cad
Guest
 
Join Date: Jul 2005
Just a WAT (wild-ass thought) on part of the OP.
The magnitude of the resulting vector is the area of the parallelogram bordered by the vectors, right? (OK, I know the vectors only border two sides, but you know what I mean.) That's double the area of the triangle formed by the two vectors and the segment between their endpoints. With me so far?

The formula for the area of a triangle in two dimensions with vertices (x1, y1), (x2, y2), and (x3, y3) is

Code:
        | x1 y1 1 |
0.5 det | x2 y2 1 |
        | x3 y3 1 |

Given two vectors A and B, in the OP's notation, the magnitude of the cross product if the two vectors were in a plane would be

Code:
    | 0   0   1 |
det | a_x a_y 1 |
    | b_x b_y 1 |

which is a_x b_y - a_y b_x, which is the exact result if a_z = b_z = 0.
Incidentally, with the two vectors in the xy-plane, the cross product points along the z-axis, which also matches the OP's result if a_z = b_z = 0.

So that is why it works in the special case. I would have to think of how it expands to the general case.

ETA: or just look at the Wikipedia article on the determinant, and the 2x2 case, to see how it is used to get the area between two vectors.
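The special case above checks out numerically. A sketch (numpy, made-up planar vectors): for two vectors in the xy-plane, the cross product points along z, and its z-component equals twice the signed triangle area from the 2D formula:

```python
import numpy as np

a = np.array([2.0, 1.0, 0.0])   # both vectors confined to the xy-plane
b = np.array([1.0, 3.0, 0.0])

c = np.cross(a, b)              # only the z-component can be nonzero here

# Signed triangle area with vertices (0,0), (a_x,a_y), (b_x,b_y):
tri = 0.5 * np.linalg.det(np.array([[0.0,  0.0,  1.0],
                                    [a[0], a[1], 1.0],
                                    [b[0], b[1], 1.0]]))

print(c)               # x and y components are 0
print(c[2], 2 * tri)   # parallelogram area = twice the triangle area
```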

Last edited by Saint Cad; 05-02-2012 at 02:04 PM..
  #15  
Old 05-02-2012, 02:31 PM
Chronos
Charter Member
 
Join Date: Jan 2000
Location: The Land of Cleves
Posts: 55,274
Quote:
Not quite unique, as anyone who's watched wildly gesticulating physic students knows.
Do you always volunteer to proctor that test, too? Hours of fun, watching the students. Especially the 2 or 3 who are trying to do it with their left hand.

And Indistinguishable, I was thinking of the higher-dimensional analogue to a cross product as being a binary tensor operation (which can't usually be identified with something that looks like a vector), not as an (n-1)-ary (pseudo)vector operation. But I suppose that either is an equally valid generalization.
  #16  
Old 05-02-2012, 02:37 PM
Indistinguishable
Guest
 
Join Date: Apr 2007
Sure; I was just focusing on the generalization which reveals what was going on with this method (trying to counter the otherwise default "it's a fake method with no basis in reality that works because it gives the right number" presumption). The point being that it's not some coincidental quirk; it works for good reason.

Last edited by Indistinguishable; 05-02-2012 at 02:38 PM..
  #17  
Old 05-04-2012, 06:01 PM
Manlob
Guest
 
Join Date: May 2000
To get the direction of c=a x b, there are three equations to be solved for the components of vector c:
a x b = i*c_x + j*c_y + k*c_z (by definition, c = a x b)
0 = a.dot.c = a_x*c_x + a_y*c_y + a_z*c_z (c must be normal to a)
0 = b.dot.c = b_x*c_x + b_y*c_y + b_z*c_z (c must be normal to b)

In matrix form these equations are:
Code:
{axb}   [ i  j  k]   {cx}
{ 0 } = [ax ay az] * {cy}
{ 0 }   [bx by bz]   {cz}
Call the above 3-by-3 matrix D. By Cramer's rule, the determinant of D is:
det(D) = (a x b) * A_11/c_x = (a x b) * A_12/c_y = (a x b) * A_13/c_z
As det(D) and a x b are the only vectors in the above, this shows that det(D) has the same direction as a x b, with scalar scaling factor A_11/c_x = A_12/c_y = A_13/c_z. A_11 = a_y*b_z - a_z*b_y = c_x, so the scaling factor is 1, and thus det(D) equals a x b.
  #18  
Old 05-04-2012, 07:46 PM
Jragon
Member
 
Join Date: Mar 2007
Location: Miskatonic University
Posts: 8,967
While your explanation is nice, Manlob, I don't think it really answers the fundamental question of why we're allowed to have a matrix over a mixed space. I understand that it works, and thanks to Indistinguishable and you I understand why it works generally, and that it's not a mathematical fluke. But nobody has given a reason yet why it's possible (within the usual mathematical definition of matrices and vectors, which requires all elements to belong to a single domain) to have a matrix with elements in both R3 and R1.
  #19  
Old 05-04-2012, 08:36 PM
Manlob
Guest
 
Join Date: May 2000
The matrix here has vectors in only one row, and in evaluating the determinant those vectors are only ever multiplied by scalars; there is no vector-times-vector anywhere in those equations. Certain other operations would be a problem, like trying to divide by a vector or finding the inverse of D.
  #20  
Old 05-05-2012, 12:42 PM
Indistinguishable
Guest
 
Join Date: Apr 2007
Quote:
Originally Posted by Jragon View Post
but nobody has given a reason yet why it's possible (within the usually used mathematical definition of matrices and vectors -- which usually involves all elements belonging to a uniform domain) to have a matrix with elements both in R3 and R1.
R3 and R1 are both subspaces of the tensor algebra generated by R3 (and in the same way, for any vector space/module V over the field/ring of scalars R, both V and R are subspaces of the tensor algebra generated by V). If you want to think of this as a uniform matrix, you can think of it as a matrix whose elements uniformly come from this tensor algebra.
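This answer can be made concrete with a computer algebra system. A sketch (sympy, with my own variable names): treat i, j, k as formal commuting symbols, take the determinant literally, and the coefficients of i, j, k come out as exactly the components of a x b:

```python
import sympy as sp

i, j, k = sp.symbols('i j k')
ax, ay, az, bx, by, bz = sp.symbols('a_x a_y a_z b_x b_y b_z')

M = sp.Matrix([[i,  j,  k],
               [ax, ay, az],
               [bx, by, bz]])

c = sp.expand(M.det())
print(c.coeff(i))   # the x-component: a_y*b_z - a_z*b_y
print(c.coeff(j))   # the y-component: a_z*b_x - a_x*b_z
print(c.coeff(k))   # the z-component: a_x*b_y - a_y*b_x
```

This is really just Thudlow Boink's "in the form of a matrix" reading made executable: sympy neither knows nor cares that i, j, k "are" vectors.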

Last edited by Indistinguishable; 05-05-2012 at 12:45 PM..