I want to learn relativity and 'rocket science'. What courses do I need to take?

With the caveat that most of those elements probably aren’t independent. The Riemann tensor, for instance (which is the only fourth-rank tensor you’re likely to encounter), has so many symmetry relationships among its elements that it only actually has 20 independent elements.
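To make that count concrete, here’s a quick Python sketch (my own illustration, not from the post above). The standard closed-form count of independent Riemann tensor components in n dimensions, after imposing the antisymmetries, pair-exchange symmetry, and the first Bianchi identity, is n²(n² − 1)/12:

```python
# Independent components of the Riemann tensor in n dimensions,
# after its symmetries are imposed: n^2 (n^2 - 1) / 12.
def riemann_independent_components(n):
    return n * n * (n * n - 1) // 12

# In 4 spacetime dimensions, the 4^4 = 256 raw entries reduce to 20.
print(riemann_independent_components(4))  # 20
```

In 2 dimensions the same formula gives just 1 independent component (essentially the Gaussian curvature), and in 3 dimensions it gives 6.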

There is a way to do it without tensors. You can use the “exterior calculus” using so-called “differential forms”. It is a beautiful way to do the mathematics, but harder to understand than tensors.

The beauty of differential forms is that all of the symmetry of the tensors is built into the differential forms themselves. Everything is totally antisymmetric, so you don’t have a bunch of redundant matrix elements as you do in the tensor formulation. You also discover why the cross product of two vectors is not really a vector; it just has the same number of independent components as a vector.
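That last point is easy to see numerically. Here’s a small sketch (my own illustration) showing that the wedge product a ∧ b, the antisymmetric part of the outer product, is a 3×3 antisymmetric array whose three independent entries are exactly the components of the familiar cross product:

```python
# The wedge product a ∧ b is the antisymmetrized outer product.
# In 3 dimensions it has exactly 3 independent components, and they
# coincide with the components of the cross product a × b.
def wedge(a, b):
    # antisymmetric 3x3 array: (a ∧ b)[i][j] = a[i]b[j] - a[j]b[i]
    return [[a[i]*b[j] - a[j]*b[i] for j in range(3)] for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

a, b = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
w = wedge(a, b)
# The entries (w[1][2], w[2][0], w[0][1]) are the cross product:
print(cross(a, b))                      # [-3.0, 6.0, -3.0]
print([w[1][2], w[2][0], w[0][1]])      # [-3.0, 6.0, -3.0]
```

In 3 dimensions an antisymmetric matrix happens to have 3 independent entries, the same as a vector, which is the coincidence that lets the cross product masquerade as a vector; in n dimensions the wedge product has n(n − 1)/2 components, so the trick only works for n = 3.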

Actually, we did nibble around the edges of matrices, now that I think about it. And we did Laplace transforms in third year (time domain to frequency domain), but they were basically black magic.

I was taking electronics engineering technology, a three-year diploma at community college rather than a four-year engineering degree at university. So while we touched on a lot of stuff, we never got as deeply into the theory as engineering students at university did.

Here’s the present course at the same institution, but of course it’s changed in the 24 years since I graduated. And Sheridan has as well… it now offers some university courses, calls itself an ‘Institute of technology and advanced learning’, and I expect it will morph into a university in the next ten years or so. Basically following the same road that Ryerson University in Toronto did.

Yes. Relativistic corrections are needed for high-precision satellites, e.g. GPS, GLONASS, and Galileo navigation satellites, but for space probes and most communication and surveillance satellites any relativistic influences are masked by other measurement errors and gravitational perturbations, and are addressed by en route trajectory corrections. Although spacecraft navigation and mission planning are really, really difficult, multi-valued “wicked” problems, plotting an intercept trajectory or a transfer to a defined orbit is really pretty easy, and plotting a two-body conic section orbit requires only planar geometry (as long as you define your coordinate system conveniently). Prussing’s Orbital Mechanics is the best general reference I’ve found for this which should be accessible on the undergraduate level. Thomson’s [url=http://www.amazon.com/Introduction-Dynamics-William-Tyrrell-Thomson/dp/0486651134/ref=pd_sim_b_2]Introduction to Space Dynamics[/url] is a pretty good and readily accessible reference.
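To show just how simple the two-body conic-section case is, here’s a short Python sketch (my own illustration with made-up orbit numbers, not from the references above). The whole orbit in the orbital plane is the single polar equation r(θ) = p / (1 + e cos θ), where p is the semi-latus rectum and e the eccentricity:

```python
import math

# Two-body conic-section orbit in the orbital plane:
#   r(θ) = p / (1 + e cos θ)
# p = semi-latus rectum, e = eccentricity, θ = true anomaly.
def orbit_radius(p, e, theta):
    return p / (1.0 + e * math.cos(theta))

# Illustrative numbers only: an elliptical orbit with p = 7500 km, e = 0.5.
p, e = 7500.0, 0.5
r_peri = orbit_radius(p, e, 0.0)       # periapsis at θ = 0
r_apo  = orbit_radius(p, e, math.pi)   # apoapsis at θ = π
print(r_peri, r_apo)  # 5000.0 15000.0 (km)
```

The same equation covers ellipses (0 ≤ e < 1), parabolas (e = 1), and hyperbolas (e > 1), which is why, with a conveniently chosen coordinate system, the geometry really is planar.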

“Rocket science” is more than just orbits, of course; the details of propulsion systems are critical to plotting the trajectories of real spacecraft, and making space vehicles with mechanisms that can deploy instruments and solar panels, and function reliably for years without any ability for repair or maintenance, requires a lot of effort and testing. These are several different technical disciplines in and of themselves, and each of them would require years of schooling and experience to gain even a basic functional grasp.

Stranger

It doesn’t look like anyone’s given the full answer to what a tensor is.

To understand what a tensor is, you only need to understand (1) what a vector space is, and (2) what a linear map is.

I’ll assume that you know what a vector space is. A linear map is then just a map between vector spaces that respects the vector operations, which are vector addition and scalar multiplication. That is, a map L between vector spaces V and W is a map such that L(u + xv) = L(u) + x L(v) for all vectors u and v in V and scalars x.

(You often hear that linear maps are matrices, but this is misleading. If you fix bases for all your vector spaces, then every linear map corresponds to a specific matrix. But the linear map and the matrix aren’t the same thing. It’s better to think of the matrix as a computational device that lets you see how the linear map acts. By analogy, consider the functions that compute the positive integer powers of real numbers. For each positive integer n, there is a function f[sub]n[/sub] such that f[sub]n[/sub](x) = x[sup]n[/sup]. But the function f[sub]2[/sub] isn’t equal to the number 2; it just corresponds to the number 2.)
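The map-versus-matrix distinction is easy to see computationally. Here’s a sketch (my own illustration): one fixed linear map on R², the coordinate swap L(x, y) = (y, x), gets two different matrices depending on which basis you pick:

```python
# One linear map, two matrices. The map L swaps coordinates: L(x, y) = (y, x).
def L(v):
    x, y = v
    return (y, x)

def coords(v, b1, b2):
    # Solve v = a*b1 + c*b2 for (a, c) via Cramer's rule (2D only).
    det = b1[0]*b2[1] - b2[0]*b1[1]
    a = (v[0]*b2[1] - b2[0]*v[1]) / det
    c = (b1[0]*v[1] - v[0]*b1[1]) / det
    return (a, c)

def matrix_of(f, b1, b2):
    # Columns are the coordinates of f(b1) and f(b2) in the basis (b1, b2).
    c1, c2 = coords(f(b1), b1, b2), coords(f(b2), b1, b2)
    return [[c1[0], c2[0]], [c1[1], c2[1]]]

# In the standard basis, L is the permutation matrix [[0, 1], [1, 0]].
print(matrix_of(L, (1, 0), (0, 1)))
# In the basis (1, 1), (1, -1) the very same map is diagonal, [[1, 0], [0, -1]].
print(matrix_of(L, (1, 1), (1, -1)))
```

Same map, two matrices; the map is the basis-independent object, and each matrix is just its shadow in one coordinate system.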

Once you know what a linear map is, the concept of a multilinear map is easy. Let V[sub]1[/sub], . . . , V[sub]n[/sub] be some finite sequence of vector spaces. (This sequence can contain multiple copies of one vector space.) Call these the source spaces. Let W be another vector space, called the target space. A multilinear map M with these sources and this target is a function that takes in a vector v[sub]1[/sub] from V[sub]1[/sub], v[sub]2[/sub] from V[sub]2[/sub], . . . , and v[sub]n[/sub] from V[sub]n[/sub], and then produces a vector

M(v[sub]1[/sub], . . . , v[sub]n[/sub]) = w

in the target space W. Moreover, (and this is what makes M multilinear) if you fix all but one of the vectors from the source spaces, M acts linearly on the remaining argument. That is, if you fix

v[sub]1[/sub], . . . , v[sub]i − 1[/sub], v[sub]i + 1[/sub], . . . , v[sub]n[/sub]

then the vector

M(v[sub]1[/sub], . . . , v[sub]i[/sub], . . . , v[sub]n[/sub])

in W is a linear function of v[sub]i[/sub].
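A familiar example (my own illustration, not from the post) is the 2×2 determinant viewed as a function of its two column vectors: it’s a bilinear, hence multilinear, map into the scalars, and you can check the slot-by-slot linearity numerically:

```python
# The 2x2 determinant, as a function of its two column vectors,
# is a bilinear (hence multilinear) map into the scalars.
def det2(u, v):
    return u[0]*v[1] - u[1]*v[0]

# Linearity in the first argument with the second held fixed:
#   det2(u + x*w, v) == det2(u, v) + x * det2(w, v)
u, w, v = (1.0, 2.0), (3.0, -1.0), (0.5, 4.0)
x = 2.5
lhs = det2((u[0] + x*w[0], u[1] + x*w[1]), v)
rhs = det2(u, v) + x * det2(w, v)
print(abs(lhs - rhs) < 1e-12)  # True
```

Note that det2 is emphatically *not* linear as a map on pairs: det2(2u, 2v) is 4·det2(u, v), not 2·det2(u, v). Multilinearity is linearity one slot at a time.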

The final new concept you need is that of the dual space of a vector space. If V is a vector space, then the dual space V* is just the vector space of linear functions from V to F, where F is your field of scalars (usually the real or complex numbers).
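Concretely (my own sketch), a covector on R³ is just “dot with a fixed list of coefficients,” and the dual basis consists of the coordinate-picking functionals e*[sub]i[/sub] with e*[sub]i[/sub](e[sub]j[/sub]) = δ[sub]ij[/sub]:

```python
# A covector (element of V*) on R^3 is a linear functional; every such
# functional is "dot with a fixed tuple of coefficients".
def covector(coeffs):
    return lambda v: sum(c * x for c, x in zip(coeffs, v))

# The dual-basis covector e*_1 picks out the first coordinate.
e_star_1 = covector((1, 0, 0))
print(e_star_1((1, 0, 0)), e_star_1((0, 1, 0)))  # 1 0
print(e_star_1((5, 7, 9)))  # 5
```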

This is all we need to define a tensor. Fix a vector space V. Then a tensor T is a multilinear map whose target space is the field of scalars, and whose source spaces consist of several copies of the dual space V* followed by several copies of V itself. That is, the sequence of source spaces looks like V*, . . . , V*, V, . . . , V, and the target space is F, the field of scalars.
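The most familiar example of such a tensor (my own illustration) is the ordinary dot product: it takes zero covectors and two vectors and returns a scalar, linearly in each slot, so it’s a tensor with source spaces V, V:

```python
# The Euclidean dot product on R^3 is a tensor: a bilinear map taking
# two vectors (and no dual-space arguments) to a scalar.
def g(u, v):
    return sum(a * b for a, b in zip(u, v))

u, v, w = (1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (1.0, 0.0, -1.0)
x = 3.0
# Bilinearity in the first slot: g(u + x*w, v) == g(u, v) + x*g(w, v)
lhs = g(tuple(a + x * b for a, b in zip(u, w)), v)
assert abs(lhs - (g(u, v) + x * g(w, v))) < 1e-12
print(g(u, v))  # 32.0
```

In relativity the spacetime metric plays exactly this role, with the Minkowski or curved-spacetime inner product in place of the Euclidean one.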

The mathematician in me wants to say that this final definition is only correct when V is finite-dimensional (so that we can identify the tensor product (and V itself) with its double dual). But this thread isn’t really for the mathematician in me.

Still, I think it’s worth noting that the interesting thing about tensors is that they themselves form a vector space. So another way to put it is that, for any bunch of vector spaces A, B, C, etc. [as few or as many as you like], there is some essentially unique vector space T [the “tensor product” of A, B, C, etc.], with the property that linear maps out of T naturally correspond to multilinear maps out of A, B, C, etc. That is, for any multilinear map from A, B, C, etc., into some space Y, there is a corresponding linear map from T into Y, and vice versa, and furthermore this correspondence is “natural” in the sense that postcomposing the same linear map to corresponding maps yields corresponding maps. So with the aid of the tensor product space, instead of having to worry about multilinear functions as such, we can readily treat everything as just a linear function on the right domain.
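In coordinates, that correspondence is very down-to-earth. Here’s a sketch (my own illustration, with arbitrary made-up coefficients): a bilinear map B on R² × R² given by a coefficient array is exactly a linear map applied to the tensor product u ⊗ v, i.e. the outer product flattened into R⁴:

```python
# Universal property in coordinates: every bilinear B on R^2 x R^2,
#   B(u, v) = sum_ij c[i][j] * u_i * v_j,
# is a *linear* map applied to u ⊗ v (the outer product, flattened to R^4).
c = [[1.0, 2.0], [3.0, 4.0]]   # arbitrary illustrative coefficients

def B(u, v):                   # the bilinear map
    return sum(c[i][j] * u[i] * v[j] for i in range(2) for j in range(2))

def tensor(u, v):              # u ⊗ v as a vector in R^4
    return [u[i] * v[j] for i in range(2) for j in range(2)]

def L(t):                      # the corresponding linear map on R^4
    flat_c = [c[i][j] for i in range(2) for j in range(2)]
    return sum(a * b for a, b in zip(flat_c, t))

u, v = (1.0, 2.0), (3.0, -1.0)
print(B(u, v) == L(tensor(u, v)))  # True
```

The bilinear map B and the linear map L carry the same information; the tensor product space is the “right domain” on which multilinear algebra becomes plain linear algebra.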

And, in general, when scientists speak of a “tensor”, they mean an element of the tensor product of a bunch of copies of some particular vector space and of its dual space.