# HELP! Tensors in semi-laymen's terms?

Hi, I’m a practicing Electrical Engineer, and my girlfriend is a Chemical Engineering PhD student, and we’re both stumped by tensors. She’s got to use them in one of her classes, and the prof. assumes everyone knows what they are and how to use them. We’ve searched the web in vain for a good explanation, but can’t find one that doesn’t involve a paragraph of Incomprehensible Math gibberish.

(example: Check out how quickly Britannica’s explanation switches from useful to inscrutable.
http://www.britannica.com/bcom/eb/article/9/0,5716,120659+10+111001,00.html
)

So far I’ve gathered this:

• A tensor is a superset that includes scalars, vectors, matrices, etc. (a scalar is a rank-0 tensor, a vector is rank 1, etc.)
• A tensor is the mapping of a vector in one coordinate system to a corresponding vector in another coordinate system. (this definition sounds like a matrix to me)
• A tensor is a quantity involving two related vectors, like the vector of stress on a point and the vector of the point’s position in space.

So which is it? And if it’s all three, how are they related?

Also, what’s a Tensor product? Dot and Cross products we know and love, but what’s a Tensor product and how do you take one? (I gather the result is a matrix, unlike Dot (scalar) and Cross (vector))

HELP!

Ron

I always screw up when I try to explain mathematical concepts here, especially ones that are just beyond my grasp. I know the words “covariant” and “contravariant” are important.

I doubt you’ll get a sufficient explanation about what tensors are and how to use them from the internet, especially if you reject the mathematical explanations.

Your info is correct, as far as it goes. Structurally, a tensor is just a collection of numbers. A single number (a scalar) is a rank 0 tensor, a row of numbers (a vector) is a rank 1 tensor, etc.

One way to use tensors is to map one vector to another, which kind of encompasses your 2nd and 3rd definitions. I wouldn’t say that’s what a tensor is, but that’s a fairly common usage. And yes, it sounds similar to a matrix because matrices are often used the same way.

A tensor product is a generalization of the dot and cross products. It defines an operation that relates two tensors and the result might be a scalar, a vector, a matrix, another tensor of the same or lesser rank, etc. There is no single tensor product – there are different ones.

The good news is that tensors are generally used either as a computational tool, in which case you just have to follow the (sometimes lengthy) steps or use a computer program; or as a theorem proving device in which case the details of the tensor are less important than its properties.

Like most fields of mathematics, tensor analysis has its own notation. Because tensors are usually very large objects, the notation is more cryptic than usual, IMHO. Superscripts and subscripts are significant, a repeated index implies a summation (“the Einstein summation convention”, just to let you know who you’re keeping company with), etc.

My advice to you is to hop to the nearest Borders or Barnes & Noble and get a Dover book on tensors or tensor analysis. Dover books tend to be inexpensive and thorough and keyed to the beginning user.

First off, let me say that any teacher who assumes that everyone knows what tensors are is a lousy teacher.

Now, then, I do General Relativity, so I’m rather familiar with the beasts. It can be somewhat difficult to describe what a tensor is, but it’s much easier to describe how to represent it and how to use it. Generally, tensors are represented by matrices, and if the tensor in question is a second-rank tensor, as most are (the only one I’ve ever seen that was higher is the fourth-rank Riemann curvature tensor), it’s a two-dimensional matrix with rows and columns, like you’re familiar with. So long as you remember to match covariant tensors with contravariant tensors (more on this in a moment), the tensor multiplication rules look just like the matrix multiplication rules.

Now, as to how to use them: The key to realize here is that usually, when you’re working with tensors, you’re not actually working with the tensors themselves, but just the components of the tensors. The components are just perfectly ordinary scalars, and the final answer of any problem involving tensors will always wind up being a number. Since the components are just scalars, all the normal arithmetic rules apply, and you can deal with them normally.
For example, suppose I have two tensors, T and U. Let’s suppose that they’re both second-rank tensors, so that each element is described by two indices (row number and column number, for matrices), and let’s also suppose that they’re both three-dimensional, so that each index has three possible values. For instance, one element of T might be designated T[sup]12[/sup], or T[sup]33[/sup] or T[sup]23[/sup]. (Of course, there are six other elements of T that I’m not mentioning there.) In general, we might represent a general element of T as, say, T[sup]ij[/sup], and a general element of U as U[sub]m[/sub][sup]n[/sup]. Notice that I have both of the indices for T upstairs, but the first U index is downstairs: Upstairs indices are called contravariant indices, and downstairs ones are called covariant. It’s important to keep track of which is which when multiplying, and you can get one from the other using something called a metric.
Now, let’s multiply these two tensors. We do this by choosing one index from each to contract (such as the second index of T, and the first index of U), and writing them both in their component form: T[sup]ij[/sup]U[sub]j[/sub][sup]k[/sup]. Note that we used the same letter for the two indices we’re contracting, and that one was upstairs and one was downstairs. This operation will get us a third tensor, which we call V: T[sup]ij[/sup]U[sub]j[/sub][sup]k[/sup] = V[sup]ik[/sup]. The way we get the elements of V is using something called the Einstein Summation convention: Whenever we see the same index upstairs and downstairs on the same side of an equation, we sum over all possible values of that index. In our case, since we said that these are 3-dimensional tensors, there are three possible values, so T[sup]ij[/sup]U[sub]j[/sub][sup]k[/sup] = T[sup]i1[/sup]U[sub]1[/sub][sup]k[/sup] + T[sup]i2[/sup]U[sub]2[/sub][sup]k[/sup] + T[sup]i3[/sup]U[sub]3[/sub][sup]k[/sup]. Remember, when we have the indices attached to T and U, we’re talking about the components, not the whole tensor at once, so the math is simpler. Now, if we want (for instance) the 1,3 component of V, we can say that i=1 and k=3, so V[sup]13[/sup] = T[sup]11[/sup]U[sub]1[/sub][sup]3[/sup] + T[sup]12[/sup]U[sub]2[/sub][sup]3[/sup] + T[sup]13[/sup]U[sub]3[/sub][sup]3[/sup]. This is just an equation for one number (the 1,3 element of V) in terms of some other numbers (the components of T and U).
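If it helps to see the bookkeeping numerically, here’s a small sketch in Python with NumPy. The arrays T and U are made-up numbers just for illustration, and NumPy doesn’t actually track upstairs versus downstairs indices; this only shows the summation mechanics.

```python
import numpy as np

# Two made-up second-rank, three-dimensional tensors (components only;
# NumPy doesn't distinguish covariant from contravariant placement).
T = np.arange(1, 10).reshape(3, 3)    # T[i][j] plays the role of T^ij
U = np.arange(10, 19).reshape(3, 3)   # U[j][k] plays the role of U_j^k

# Contract the second index of T with the first index of U:
# V^ik = T^ij U_j^k, summing over the repeated index j
# (the Einstein summation convention).
V = np.einsum('ij,jk->ik', T, U)

# The same thing spelled out for one component, V^13
# (indices 0 and 2 here, since Python counts from zero):
v13 = sum(T[0, j] * U[j, 2] for j in range(3))

print(V)
print(v13 == V[0, 2])   # the spelled-out sum matches the einsum result
```

Note that for second-rank tensors this contraction is exactly ordinary matrix multiplication, which is why matrices keep coming up in this thread.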

I definitely agree with that statement. Unfortunately, Gonzoron’s professor is all too common. FWIW, I have a BS in Chemical Engineering plus 3 years toward a Master’s, and I still have plenty of trouble dealing with tensors.

This is a pretty good site for a reasonable explanation of tensors.

http://www.physics.purdue.edu/~hinson/ftl/html/FTL_part3.html#sec:tensors

Chronos: You do GR, huh? In that case I may have some questions for you. What’s a nice GR guy like you doing in a place like this?

Wow, Ring, THANKS for the cool link! I paged up and discovered stuff to warm the cockles of any dedicated SF reader!

The answers above are right but perhaps I can help clarify. There are three ways of multiplying two vectors together. Multiply them with a dot between them (the dot product), and you get a scalar. A•B is a scalar. Multiply them with a cross between them and you get a vector. A×B is a vector. Multiply them with neither a dot nor a cross and you get a dyadic (a second-order tensor). AB is a dyadic. Presumably you know the mechanics of doing the cross product and dot product. See the example below for the mechanics of the tensor product.

i, j, and k are the unit vectors in the x-, y-, and z-directions.

A = 3i + 0j + 5k
B = 7i + j + 0k
AB = 21ii + 3ij + 35ki + 5kj

ii, ij, etc., are the unit dyads. In three-dimensional space, there are nine unit dyads (but in my example, five of them have coefficient zero). (Note that ij is not the same as ji). The tensor multiplication of a vector and a dyadic yields a triadic with unit triads (in three-dimensional space) of the type iii, iij, etc.
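In component form the dyad AB is just the outer product: its mn component is A[sub]m[/sub]B[sub]n[/sub]. Here’s a quick sketch in Python with NumPy using the A and B from the example above (the row/column ordering i, j, k is my own bookkeeping choice):

```python
import numpy as np

# The vectors from the example above, in (i, j, k) components
A = np.array([3, 0, 5])
B = np.array([7, 1, 0])

# The dyad AB written as an outer product: (AB)_mn = A_m * B_n
AB = np.outer(A, B)
print(AB)
# With rows/columns ordered i, j, k:
#   AB[0, 0] is the ii coefficient (21), AB[0, 1] the ij coefficient (3),
#   AB[2, 0] the ki coefficient (35), AB[2, 1] the kj coefficient (5);
#   the other five of the nine entries are zero.

# ij is not the same as ji: the outer product is not symmetric.
BA = np.outer(B, A)
print((AB == BA.T).all())   # BA is the transpose of AB, not AB itself
```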

I could sit here and read Chronos talking about applied mathematics all day.

If I were an Empress, I would have an Imperial Court Mathematician, and I would have maths supervisions once a week. I might even do homework for them.

Dyadics!!! Yes, Dyadics!! 2nd-order tensors, that’s exactly what she’s working with. Tell me more! What does the dyadic signify geometrically? The cross product is a vector perpendicular to the 2 operands, with length proportional to the area of the parallelogram (direction according to the right-hand rule). The dot product is the length of the shadow one vector makes on the other, times the length of the vector it falls on. Does the dyadic signify something like that?

One example in the class: an imaginary cube in a current of fluid. The stress on the cube is a dyadic… somehow. There’s a vector of pressure on each face of the cube, and somehow you can wrap all 6 of the vectors up in one dyadic. There’s an equation involved: x(underbar,underbar)=a(underbar)b(underbar)+c(underbar)d(underbar) … x is a dyadic, a,b… are vectors, but which ones? Ideas?

Thanks everyone for all your help so far. It’s been very illuminating. Just hasn’t exactly applied to the problem at hand yet.

I knew I had a book that described this, but it took me a while to find it. I honestly don’t understand this sort of thing as well as I should, but I’ll take a crack at it.

The dot product of a vector with a dyadic is a vector. Let AB be the dyadic and D be the vector. I won’t prove it here, but D•(AB) = (D•A)B, which is a scalar times a vector, i.e., a vector.

Call the Stress dyadic T. Call the surface vector S (it has the magnitude and units of the area of the surface, and its direction is normal to the surface). S•T is then the vector force on the surface. To find the net force on the six surfaces of the cube, find the vector force on each of the six sides and take the vector sum of them.
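As a rough numerical sketch of that recipe (the stress tensor below is invented purely for illustration, and on components S•T is just a vector-matrix product):

```python
import numpy as np

# A made-up, symmetric stress tensor (components in some pressure units)
T = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.0],
              [0.0, 0.0, 3.0]])

# Surface vector for the +x face of a unit cube: area 1, normal along x
S = np.array([1.0, 0.0, 0.0])

# Force on that face: F_k = S_m T_mk -- a vector, as promised
F = S @ T
print(F)   # the x-row of T: force (2, 0.5, 0) on the +x face

# Net force on the cube: sum the forces over all six faces. For a
# uniform stress field the six surface vectors cancel in pairs, so
# the net force is zero.
normals = [np.array(v, dtype=float) for v in
           [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
            (0, -1, 0), (0, 0, 1), (0, 0, -1)]]
net = sum(n @ T for n in normals)
print(net)   # zero net force
```

A nonzero net force only shows up once the stress varies from face to face, which is where the fluid-mechanics equations come in.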

I hope this helps. The book I’m relying on is Theoretical Physics: Mechanics by F. Woodbridge Constant. Dyadics are described starting on p. 41, and the stress dyadic is described on p. 201.

Took a grad. course in Tensors once. Got an ‘A’. Had no idea what they represented. Just pushing symbols around. (I’m not a Math idiot: Got a 990 on the adv. math GRE.)

Ever since I’ll ask the odd (redundant?) math prof. to give a short explanation of what a tensor is. None of them can. (But the same can prob. be said about any “short explanation” question asked of a math prof.)

The URL given is typical. It doesn’t explain in any sort of simple, easy-to-understand way. Math people just can’t write well (and they don’t care either). I can read the abstract of any physics, biology, or chemistry paper and get a feel for what’s in it and why the work is being done. Not so for math papers. In my area (theoretical computer science) the papers used to be fairly readable, but not any more. The trend is to emulate math-style writing. Ack.

At least the OP didn’t ask what a Banach space was.

FtG aka GLP
“Infinitely norm” this.

Everyone’s going to hate me for this, but… A cross product of two vectors doesn’t really get you a vector. It gets you an antisymmetric tensor which transforms as a pseudovector, and has n(n-1)/2 independent components, where n is the dimensionality of the vectors you crossed. It’s just lucky for us that we live in a world with 3 spatial dimensions, so that the result of a cross product can be made to look like a vector: If you take two four-dimensional vectors and take the cross product of them, you end up with a 4 by 4 tensor with six independent components. Furthermore, if you take a vector and put it through a parity transformation, it’ll reverse sign, but if you take a pseudovector (like the cross product of two vectors) and put it through a parity transformation, it’ll retain the same sign.
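Here’s a sketch of that claim in Python with NumPy (the vectors are arbitrary): build the antisymmetric tensor M[sub]mn[/sub] = A[sub]m[/sub]B[sub]n[/sub] − A[sub]n[/sub]B[sub]m[/sub] and read off its three independent components, which are exactly the cross-product components.

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])
B = np.array([4.0, 5.0, 6.0])

# The "real" object behind the cross product: the antisymmetric tensor
# M_mn = A_m B_n - A_n B_m
M = np.outer(A, B) - np.outer(B, A)
print(M)

# Antisymmetry: M_nm = -M_mn, so the diagonal vanishes and in 3
# dimensions only 3*(3-1)/2 = 3 components are independent...
print((M == -M.T).all())

# ...and those three are precisely the components of A x B:
C = np.cross(A, B)            # C == (-3, 6, -3)
print(M[1, 2] == C[0], M[2, 0] == C[1], M[0, 1] == C[2])
```

In 4 dimensions the same construction gives a 4×4 antisymmetric tensor with 4·3/2 = 6 independent components, which no longer fits in a vector, exactly as described above.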

Really?

Chronos wrote:

What do you mean by independent components? If you take r × F in two dimensions you get the torque, which is also a two-dimensional vector. According to the formula it should be a vector with one independent component. So I obviously don’t know what independent means.

What do you mean by “r X F in two dimensions”?

I have no idea. Obviously, you can’t have a 2 dimensional cross product. If the r and F vectors are in the xy plane then the cross product will be in the z direction.

I couldn’t possibly have made the above statement - it must have been someone else (Please help me find someone to blame this asinine statement on)

I don’t claim to fully understand the subtleties of what Chronos said, but I think Ring’s making sense.

To my way of thinking, you can take a cross product in two dimensions, and the result is a scalar, which agrees with n(n-1)/2.

In a two-dimensional world, torque is a scalar quantity (as opposed to the vector quantity that it is in 3-d). For example, if r = (1,0) and F = (0,1), Torque = r X F = 1. Positive torque, by convention, represents counter-clockwise.

Extending that 2-d world to the x-y plane in three dimensions, we get r X F = (1,0,0) X (0,1,0) = (0,0,1).
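A throwaway sketch in Python checking those numbers: in 2-D the “cross product” collapses to the single number r[sub]x[/sub]F[sub]y[/sub] − r[sub]y[/sub]F[sub]x[/sub], which matches the z-component of the 3-D cross product.

```python
import numpy as np

r2 = np.array([1.0, 0.0])
F2 = np.array([0.0, 1.0])

# In two dimensions the cross product has 2*(2-1)/2 = 1 independent
# component -- a scalar:
torque_2d = r2[0] * F2[1] - r2[1] * F2[0]
print(torque_2d)   # 1.0; positive means counter-clockwise

# Embed the same vectors in the x-y plane of 3-D space:
r3 = np.array([1.0, 0.0, 0.0])
F3 = np.array([0.0, 1.0, 0.0])
print(np.cross(r3, F3))   # (0, 0, 1): the scalar reappears as the z part
```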

Except for the part about r X F being a two dimensional vector. But Ring already said he didn’t say it. And I think he’s even gonna help us find the guy who did…

He’s OK.

Because you mentioned Tensors and your girlfriend in one sentence, I was reminded of the famous (to me, anyway) poem by Stanislaw Lem “Love and Tensor Algebra”

Come, let us hasten to a higher plane,
Where dyads tread the fairy fields of Venn,
Their indices bedecked from one to n,
Commingled in an endless Markov chain!

And that’s just the first verse… find the full text at:
http://www.ee.duke.edu/~wrankin/misc/tensor.html

It’s quite romantic (to the right type of girl).

brad_d’s got it. Saying that the tensor has so many independent coordinates just means that it takes that many numbers to fully specify the thing. A scalar, indeed, has one independent component, and 2-D “torque” is a scalar.

You might notice, then, the self-consistency of Chronos’s statements:

He says we have a 4x4 tensor. That means it looks like T[sup]ij[/sup] with i and j taking on values 0 through 3. Now, a 4x4 tensor has sixteen components, a priori. But, he also tells us that ours is antisymmetric. Antisymmetry requires T[sup]ij[/sup] = -T[sup]ji[/sup]. This implies that the diagonal elements (T[sup]00[/sup], T[sup]11[/sup], etc.) are all zero. That takes us down to twelve independent components. But, half of these components are just the negative of the other half (antisymmetry). Thus, we have only six independent components if we have a 4x4 antisymmetric tensor.

This counting is exactly where the n(n-1)/2 thing comes from.
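The counting is easy to check by brute force (a throwaway sketch): only the entries above the diagonal of an antisymmetric n×n tensor are free, since the diagonal must be zero and everything below is fixed by antisymmetry.

```python
# Count independent components of an antisymmetric n x n tensor:
# the diagonal is forced to zero, and each entry below the diagonal
# is the negative of one above it, so only the strict upper triangle
# is free.
def independent_components(n):
    return sum(1 for i in range(n) for j in range(n) if i < j)

for n in (2, 3, 4):
    print(n, independent_components(n), n * (n - 1) // 2)
# n=2: 1 free component (a scalar, like 2-D torque)
# n=3: 3 free components (looks like a vector -- the cross product)
# n=4: 6 free components (the 4x4 case described above)
```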