Matrix calculation for dot product?

Is there a way to use a matrix for dot products of vectors, like you can take the determinant of
[i j k]
[Ax Ay Az]
[Bx By Bz]
to get the cross product A x B?
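(For reference, expanding that determinant along the top row recovers the usual component formula:

    A x B = (AyBz - AzBy) i - (AxBz - AzBx) j + (AxBy - AyBx) k

so the determinant there is really just a mnemonic for the cross product.)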

Well, if you make one vector a row vector, and the second a column vector, and multiply them together according to the ordinary rules for matrix multiplication, then what you’ve just calculated is the dot product of the two vectors.

Seems kinda circular, though, since matrix multiplication essentially involves computing dot products anyway.

Even simpler, make them both row/column vectors and just compute wv[sup]T[/sup]/w[sup]T[/sup]v.
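As a quick sketch of both conventions (illustrated here in numpy, borrowing the example numbers from a few posts down):

    import numpy as np

    w = np.array([1, 6, 5])
    v = np.array([4, 9, 3])

    # Both as row vectors (1 x 3): w v^T is a 1 x 1 matrix
    print(w.reshape(1, 3) @ v.reshape(1, 3).T)   # [[73]]

    # Both as column vectors (3 x 1): w^T v is also a 1 x 1 matrix
    print(w.reshape(3, 1).T @ v.reshape(3, 1))   # [[73]]

    # Either way, the single entry is the ordinary dot product
    print(np.dot(w, v))                          # 73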

You might want to clear up that the / in that expression means “or, respectively” rather than division.

Umm… what are row and column vectors?

Put plainly, what do I need to do in order to do this? (Side note: I had a physics test today that I didn’t know about, and dot products were on there… however, we weren’t responsible for knowing how to do it with matrices, since we never learned it.)

Could someone explain this for the matrix-hater?

A row vector is a 1 x n matrix, while a column vector is an n x 1 matrix.

For example, if I have vectors A = (1,6,5) and B = (4,9,3), and I wanted A dot B, I’d set it up this way:


           [4]
[1  6  5]  [9]
           [3]

The first vector I set up as a row matrix, and the second as a column matrix, and then I just multiply them as normal matrix multiplication. Not that this saves any effort, in this case, but it’s a good foundation to start building on for more complicated matrix algebra (one might, for instance, have some other square matrix in between the two vectors).
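For the record, carrying that multiplication out gives a 1 x 1 matrix: [1*4 + 6*9 + 5*3] = [4 + 54 + 15] = [73], and that single entry, 73, is A dot B.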

The only advantage I can see to using matrices instead of just calculating the way you probably learned (Ax*Bx + Ay*By + Az*Bz) is that you can multiply matrices on a TI-83 or similar, which might be easier than the formula above if the numbers in the vectors are long and decimal-y and your teacher wants a high level of accuracy.

The real advantage is that both operations are described by a single algorithm. It’s always kinda cool when something like that happens.

There is another advantage, which, granted, wouldn’t do you much good unless you went much further in math: Frequently, when you compute a dot product x · y, you’re really applying a linear functional x to a vector y. That is, x lives in the dual space V[sup]*[/sup] of the space V containing y. By using matrices, your notation accords with the fact that x and y live in different vector spaces, just as row and column vectors do.

To be more accurate, the function y -> x · y is a linear functional. x lives in the same vector space as y, but the dot product specifies an isomorphism between the vector space and its dual.

My post was accurate as it stood, though it was not precise, thanks to my escape-hatch word “frequently” and to my not being explicit about how I was using the notation. But, to make things more precise, I was assuming that a basis for V had been given (as is “frequently” the case in applications), and I was using x · y as nothing but notation for x(y), so that x is literally an element of the dual space of the space containing y.

Often, of course, the dot product is being used as an inner product on V, but I’m not talking about those cases.
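(To spell out the isomorphism mentioned above, in my own gloss for the finite-dimensional case with the standard dot product: each vector x in V determines a functional x[sup]*[/sup]: y -> x · y, and the map x -> x[sup]*[/sup] is a linear bijection from V onto V[sup]*[/sup]. In matrix terms it’s just transposition: writing x as a column vector, the corresponding functional is the row vector x[sup]T[/sup], since x · y = x[sup]T[/sup]y, which is exactly the row-times-column setup from earlier in the thread.)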