Vectors and Tensors

If I have 2 vectors u and v then (u dot v)u gives me the projection of v on u. Alternatively, uu(v) also gives me the projection of v on u.

Where does the equality uu(v) = (u dot v)u come from, and what is it that makes uu a tensor and not just a matrix?

Thanks

Actually the projection of v on u is
((u dot v) / |u|^2) u.
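If it helps, here’s a quick numpy sketch of that formula (the vectors and the project name are just made-up examples):

import numpy as np

def project(v, u):
    # Orthogonal projection of v onto u: ((u dot v) / |u|^2) u
    return (np.dot(u, v) / np.dot(u, u)) * u

u = np.array([1.0, 2.0, 2.0])   # made-up example vectors
v = np.array([3.0, 0.0, 4.0])

p = project(v, u)
print(p)                   # the projection of v on u
print(np.dot(v - p, u))    # the leftover piece is perpendicular to u: ~0.0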

What does uu(v) mean? I don’t get the notation you are using.

If uu means u^t u, and u is a column vector, then it’s pretty easy to see that u^t u = u dot u.
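Just as a sketch of that reading (u as a column), numpy shows the two agree, apart from u^t u coming out as a 1x1 matrix rather than a bare scalar:

import numpy as np

u = np.array([[1.0], [2.0], [2.0]])     # u as a 3x1 column vector

print(u.T @ u)                          # [[9.]] -- u^t u, a 1x1 matrix
print(np.dot(u.ravel(), u.ravel()))     # 9.0    -- u dot u, a scalar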

u dot u is a tensor because every scalar is a tensor of rank 0.

Also, uu has to be a scalar in that equation, or else the dimensions of the left-hand side just don’t match those of the right-hand side.

uu is a dyad. The text I’m presently looking at writes it this way, but I think it’s usually written with a multiplication sign with a circle around it (the tensor product sign, so u ⊗ u).

I worked it out both ways and the answer comes out the same, so it must be right. I just can’t figure out how to derive that uu(v) = u(u dot v).

Do you know anything about indicial notation, Sacroiliac? That makes it pretty clear. Assuming you do, the dyad uu has indices given by

(uu)_ij = u_i u_j

so that uu(v) would be, if I understand your notation,

u_i u_j v_j = u (u dot v), summing over the repeated index j.
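If you want to see the index juggling numerically, here’s a little numpy sketch (outer for the dyad, einsum for the sum over j; the vectors are arbitrary):

import numpy as np

u = np.array([1.0, 2.0, 2.0])          # arbitrary example vectors
v = np.array([3.0, 0.0, 4.0])

uu = np.outer(u, u)                    # (uu)_ij = u_i u_j
lhs = np.einsum('ij,j->i', uu, v)      # u_i u_j v_j, summed over j
rhs = u * np.dot(u, v)                 # u (u dot v)

print(np.allclose(lhs, rhs))           # True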
This is likely to be useless, so here’s a more intuitive way of thinking about it. The dyad AB is the product of two vectors A and B, and you can just think about it as A acting to the left and B acting to the right.

When you multiply on the right by C, you take the dot product of the right-hand vector B with the vector C, leaving A * (B dot C).

Similarly, if you multiplied on the left by C, you take the dot product of the left-hand vector A with the vector C, leaving B * (A dot C).

I’m not sure if this helps any or not, but that’s what a dyad is, pretty much by definition.
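Here’s that left/right behaviour as a short numpy sketch (A, B, C are arbitrary vectors; the dyad AB is just the outer product):

import numpy as np

A = np.array([1.0, 0.0, 2.0])
B = np.array([0.0, 3.0, 1.0])
C = np.array([2.0, 1.0, 1.0])

AB = np.outer(A, B)                            # the dyad AB

print(np.allclose(AB @ C, A * np.dot(B, C)))   # (AB)C = A (B dot C) -> True
print(np.allclose(C @ AB, B * np.dot(A, C)))   # C(AB) = B (A dot C) -> True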

A matrix can be considered as a special case of a tensor. A matrix is a row of columns (or equivalently, a column of rows), but you can also have tensors which act like a row of rows, or a column of columns. In index notation (also referred to as Einstein notation), you denote this by whether the index is a superscript or subscript (“upstairs” or “downstairs”), and strictly speaking, for a dot product, u dot v = u^i v_i (or u_i v^i), not u_i v_i.

The distinction between rows and columns, or between upstairs and downstairs indices, is unimportant in Euclidean Cartesian coordinates, but it becomes very important in other coordinate systems (as, for example, if you’re doing relativity).
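To see the upstairs/downstairs distinction actually bite, here’s a small numpy sketch with a made-up non-Cartesian metric g; the real dot product contracts an upstairs index against a downstairs one, g_ij u^i v^j:

import numpy as np

# made-up metric for a skewed (non-Cartesian) coordinate system
g = np.array([[2.0, 1.0],
              [1.0, 3.0]])

u_up = np.array([1.0, 2.0])   # contravariant components u^i
v_up = np.array([3.0, 1.0])   # contravariant components v^i

u_down = g @ u_up             # lower the index: u_i = g_ij u^j

print(np.dot(u_down, v_up))   # u_i v^i = g_ij u^i v^j -> 19.0
print(np.dot(u_up, v_up))     # naive u^i v^i          -> 5.0, a different number

# with g = the identity (Cartesian coordinates) the two numbers would agree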

Thanks everyone. Chronos, covariant/contravariant drove me crazy till I found out they were just each other’s dual spaces. In 3 dimensions the inner product is a lot easier with 3 terms than 9 terms.

This made my head spin around. Could you explain?

Sacroiliac, think three dimensions instead of two, if you must. At its most basic, the mathematics of tensors deals with generalizing how you can transform pairs of vectors.