Given a basis g[sub]1[/sub] = (1,-1,2), g[sub]2[/sub] = (0,1,1), g[sub]3[/sub] = (-1,-2,1)
The reciprocal basis is g[sup]1[/sup] = (1/2,-1/6,1/6), g[sup]2[/sup] = (-1/2,1/2,1/2), g[sup]3[/sup] = (-1/2,-1/6,1/6)
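The numbers above are easy to check by machine: stack the g[sub]i[/sub] as the columns of a matrix, invert it, and the rows of the inverse are the g[sup]i[/sup]. A quick sketch with NumPy (the basis vectors are the ones from the post):

```python
import numpy as np

# Columns of M are the covariant basis vectors g_1, g_2, g_3 from the post.
M = np.array([[ 1,  0, -1],
              [-1,  1, -2],
              [ 2,  1,  1]], dtype=float)

# The reciprocal (contravariant) basis vectors are the ROWS of M^-1,
# so g^i . g_j = delta^i_j holds by construction.
M_inv = np.linalg.inv(M)

for i, gi in enumerate(M_inv, start=1):
    print(f"g^{i} =", gi)

# Biorthogonality check: M^-1 @ M should be the identity matrix.
print(np.allclose(M_inv @ M, np.eye(3)))  # True
```

The printed rows come out as (1/2, -1/6, 1/6), (-1/2, 1/2, 1/2), and (-1/2, -1/6, 1/6), matching the reciprocal basis quoted above.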
The first set of bases is called covariant and the second set is called contravariant, but in non-curvilinear coordinates couldn’t you just reverse this nomenclature? It doesn’t really make a difference, does it?
Also, if you represent a vector as v[sup]i[/sup]g[sub]i[/sub], this seems to say that you have a contravariant component and a covariant base. What does this mean? The component is just what you multiply by the base, isn’t it? Wouldn’t v[sub]i[/sub]g[sub]i[/sub] mean exactly the same thing?
This is pretty advanced stuff, and even here there aren’t many people who’ve dealt with it. Maybe one of our resident physicists will be along to explain it shortly, but you’d probably be better off talking to your professor (I assume this is for a class). Just out of curiosity, what book are you using?
What you’ve discovered is that a finite-dimensional vector space is isomorphic to its dual vector space. Since they are isomorphic, you could switch one for the other in every sentence and still have all your theorems be true. All that’s really important is that you are consistent about which space you call the original space, and which space you call the dual. (Note that this is only for finite dimensional spaces. In general, infinite dimensional spaces are very different from their duals.)
Thanks. But could you please explain why the word “contravariant” is used for the component of a covariant base? Is this just the way it’s done using the Einstein summation convention or does it represent something I’m not seeing?
Ultrafilter, I’m using a bunch of different texts, but this one is downloadable. Also, I don’t have a prof; I’m trying to learn this stuff on my own.
Just why the word “contravariant” is used in this context.
If you have v[sup]1[/sup]g[sub]1[/sub] = 3(1,-1,2), then you’re just multiplying the length of (1,-1,2) by 3, in the direction of (1,-1,2). So what makes the component “contravariant”?
Also, it would seem that you could exchange the terms contravariant/covariant with respect to the bases in a non-curvilinear coordinate system without hurting anything. Once you get into transforming via partials, then I can see how these terms have a specific meaning, but not in what I described in my OP.
Let V be your vector space, with two different “covariant” bases: {g[sub]i[/sub]} for i=1…n, and {h[sub]j[/sub]} for j=1…n. Suppose the change-of-basis matrix from the g-basis to the h-basis is A; that is, h[sub]j[/sub] = g[sub]i[/sub]A[sup]i[/sup][sub]j[/sub], where A[sup]i[/sup][sub]j[/sub] is the element in the i-th row and j-th column of A. (I’m using the Einstein summation convention just because it’s easier to type.)
Now suppose v = v[sup]i[/sup]g[sub]i[/sub]. The vector v can also be written in the h-basis; but when we work out the components of v in that basis, we get v=w[sup]j[/sup]h[sub]j[/sub] where w[sup]j[/sup] = B[sup]j[/sup][sub]i[/sub]v[sup]i[/sup] and B is the inverse of A.
In other words, when we transform the base by a linear transformation A, we have to transform the components by the inverse of that transformation. That (in my mind, anyway) is why the base is described as “covariant” while the components are described as “contravariant”.
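Here’s a numerical sketch of that point, with a hypothetical change-of-basis matrix A chosen just for illustration (the starting g-basis is the standard basis, so the h-vectors are simply the columns of A):

```python
import numpy as np

# Hypothetical invertible change-of-basis matrix A: h_j = g_i A^i_j.
# With the standard basis as {g_i}, the h_j are the COLUMNS of A.
A = np.array([[2, 1, 0],
              [0, 1, 1],
              [1, 0, 1]], dtype=float)
B = np.linalg.inv(A)   # B = A^-1

# A vector with components v^i in the g-basis...
v_g = np.array([3.0, -1.0, 2.0])

# ...has components w^j = B^j_i v^i in the h-basis: the components
# transform by the INVERSE of the matrix that transformed the basis.
w_h = B @ v_g

# Reconstruct v from the h-basis and check it's the same vector:
v_reconstructed = A @ w_h
print(np.allclose(v_reconstructed, v_g))  # True
```

The basis went forward by A while the components went backward by A[sup]-1[/sup], which is exactly the “contra” in contravariant.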
Now granted, I’m not a physicist; the above explanation is just what seemed to be the case back when I first studied differential geometry. Hopefully a physicist will still wander by and give their two cents.
I’m not far enough along to know for sure but I don’t think so. In curvilinear coordinates the transformation from one base to another must be local and must involve partial rates of change.
However, in non-curvilinear coordinates all you do is 1) form a matrix whose column vectors are the covariant basis vectors, 2) invert the matrix, and 3) read off the row vectors as the contravariant basis.
The chance of this being correct, however, is very small. I don’t know how to interpret this stuff intuitively yet. I think this is also what Math Geek is saying.
Math Geek, thanks. I’m going to have to think about your post for a while, but I wanted to respond to Achernar. I’m not sure if I’m saying what you’re saying or not.
Well, I think that matrix is just the Jacobian matrix, which is defined as partial derivatives of one basis with respect to the other. The only difference in this regard between curvilinear and non is that for non, the Jacobian matrix is a constant. This is because the basis vectors in one basis are just linear combinations of the basis vectors in the other. Is that right, somebody who knows?
Okay, on second thought, I have no idea what I was saying in the last post. The Jacobian matrix is defined in terms of the partials of the coordinates, not the basis vectors. And in non-curvy coordinate systems, the coordinates of one system are linear in the other. Mathematically speaking, there is little difference between transforming to a curvy system and a non-curvy one. It just so happens that the partial derivatives will be equal to the representation of the basis vectors.
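That constant-vs-varying Jacobian distinction can be checked numerically. A sketch, using a made-up linear map L and polar coordinates as the curvy contrast case:

```python
import numpy as np

# For a LINEAR (non-curvy) coordinate change x' = L x, the Jacobian
# d x'^i / d x^j is just L: the same constant matrix at every point.
L = np.array([[1.0, 2.0],
              [0.0, 1.0]])

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f at the point x."""
    n = len(x)
    J = np.zeros((n, n))
    for j in range(n):
        dx = np.zeros(n); dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

linear = lambda x: L @ x
# Same Jacobian at two different points:
print(np.allclose(numerical_jacobian(linear, np.array([0.3, -1.2])), L))  # True
print(np.allclose(numerical_jacobian(linear, np.array([5.0,  7.0])), L))  # True

# Contrast: polar -> Cartesian is curvy, so its Jacobian varies with position.
polar = lambda p: np.array([p[0] * np.cos(p[1]), p[0] * np.sin(p[1])])
J1 = numerical_jacobian(polar, np.array([1.0, 0.0]))
J2 = numerical_jacobian(polar, np.array([2.0, 1.0]))
print(np.allclose(J1, J2))  # False
```

So the linear case really does give one fixed matrix everywhere, while the polar Jacobian depends on where you evaluate it.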
I might take a stab at this. In a flat, Riemannian space, in Cartesian coordinates, you can use covariant and contravariant objects interchangeably. Since this is the case for most subjects, very few folks worry about the distinction: g[sub]1[/sub] would look exactly like g[sup]1[/sup], and likewise, V[sup]1[/sup] would look just like V[sub]1[/sub]. Since you’re asking the question in the first place, though, I presume that you’re currently taking a subject where it does matter, and my first guess would be a relativity course (could you give us some context, please?). In that case, even in flat Minkowski space, you can’t swap an individual covariant vector with the corresponding contravariant vector. But you can always get away with swapping all contravariant and covariant things: it’s arbitrary which set you call which.
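A concrete illustration of that Minkowski point, assuming the (-,+,+,+) signature convention and an arbitrary example vector:

```python
import numpy as np

# In flat Minkowski space the metric eta lowers an index,
# V_mu = eta_{mu nu} V^nu, and flips the sign of the time component,
# so an individual contravariant vector is NOT the same list of
# numbers as its covariant partner.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])

V_upper = np.array([2.0, 1.0, 0.0, 3.0])  # contravariant components V^mu
V_lower = eta @ V_upper                    # covariant components V_mu

print(V_lower)  # [-2.  1.  0.  3.]  -- time component sign flipped

# In Euclidean 3-space with Cartesian coordinates the metric is the
# identity, so lowering an index changes nothing:
print(np.eye(3) @ np.array([1.0, 2.0, 3.0]))  # [1. 2. 3.]
```

That identity-metric case is exactly why the distinction is invisible in ordinary Cartesian work: raising and lowering are the same operation there.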
I’m halfway through “Classical Mechanics” by Goldstein, halfway through “Classical Dynamics” by Marion & Thornton, on the last chapter of “Intro to Electrodynamics” by Griffiths, and I just bought “A First Course in GR” by Schutz. Frustratingly, I keep running into tensors, so I downloaded the above text and I’m laboriously and confusedly working my way through it.
All the notation I’ve seen has a contravariant component coupled with a covariant basis or vice versa, so what makes a vector contra or co: the base or the component? It seems to me that the basis must determine this, but I’m still confused as to why the component would be called the opposite of whatever the base is. And again, thanks for your help.