What is a matrix used for?

No, vectors really do have certain things kept fixed under coordinate transformations; rotations, as I recall, are enough for this purpose. Some examples of things preserved under coordinate transformations are the length of a vector, or the angle between two vectors. So no matter what set of coordinates I measure them in, the dot product of two vectors is the same.

For instance, let’s say I have a vector whose components are [1,2,3]. If I change coordinates by x->y, y->z, and z->x, I get [3,1,2] in the new coordinate system. This is still the same vector, of course, and it has the same length (sqrt(14), in this case).

Distinguish that from, say, the first entry of the above vector, which in one coordinate system is 1 and in another is 3. Because it changed (and even changed magnitude) under a coordinate transformation, it wasn’t a scalar.
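
If it helps to see this concretely, here’s a quick numpy sketch of the above (numpy is just my choice of tool, and writing the relabelling as a matrix is my own framing):

```python
import numpy as np

# The relabelling x->y, y->z, z->x, written as a matrix. Each row of P
# says which old component becomes the new one.
P = np.array([[0, 0, 1],    # new x = old z
              [1, 0, 0],    # new y = old x
              [0, 1, 0]])   # new z = old y

v = np.array([1, 2, 3])
w = P @ v                        # components in the new system

print(w)                         # [3 1 2] -- same vector, new labels
print(np.linalg.norm(v))         # sqrt(14) ~ 3.7417...
print(np.linalg.norm(w))         # sqrt(14) again -- length is invariant
print(v[0], w[0])                # 1 vs 3 -- so the first entry is no scalar
```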

That’s what we mean by vectors and scalars in physics, at least: things with respectively 1 and zero indices which transform in a certain way under coordinate transformations. Hopefully someone better at explaining this sort of thing than me will stumble on through this thread and be able to provide a more useful explanation of what I mean. Help me, Obi Wan Chronos!

Eigenvectors.
Eigenvalues.
Make a poor biologist cry.
Make him whimper, moan, and sigh,
Eigenvectors.
Eigenvalues.

Eigenvectors.
Eigenvalues.
Why won’t the world recommode
To a monovariant mode?
Eigenvectors.
Eigenvalues.

gr8guy, matrices that preserve certain properties of a set of vectors or points are called affine transformations. Things like, as you say, rotating, scaling, and translating a vector don’t change the collinearity of points (three points in a straight line will remain in a straight line) or the ratios of distances along a line (a point midway between two points will remain midway between them). Angles and lengths, mind you, are only preserved by the stricter rigid subset (rotations and translations); a general affine map like a shear will change them. (I didn’t get this off the top of my head, I Googled “affine transformation” and reminded myself. I’ll stop with the pedantry and let you, gentle reader, do the same if you’re interested.)
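
To see the midpoint property in action, here’s a minimal numpy sketch (the particular matrix and offset are arbitrary picks of mine, not anything canonical):

```python
import numpy as np

# An affine map f(p) = A p + b: any invertible A and any shift b will do.
A = np.array([[2.0, 1.0],
              [0.5, 3.0]])
b = np.array([1.0, -2.0])
f = lambda p: A @ p + b

p, q = np.array([0.0, 0.0]), np.array([4.0, 2.0])
mid = (p + q) / 2                     # the point midway between p and q

# The image of the midpoint equals the midpoint of the images:
print(f(mid))                         # [6. 2.]
print((f(p) + f(q)) / 2)              # [6. 2.] -- same point
```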

zoid, sorry if I implied otherwise. You’re a few months ahead of me in book-larnin’; I don’t know what that says about my calibre. (Insert Dr. Evil quote here.) Incidentally, if I screwed up any of those affine qualities, I’d be happily corrected.

Here’s something interesting, moejuck: in two dimensions (and therefore with 2x2 matrices), rotation and scaling are represented as multiplications, but translation is represented as addition. In other words, to take the point [x, y] and rotate it about the origin, I can multiply by a rotation matrix as Chronos so ably demonstrated. Similarly, I can multiply by a scaling matrix to change [x, y] to, say, [ax, by]. But to move [x, y] I need to add to it (getting [x+a, y+b]). This is a pain in the butt because multiplicative transforms compose: several of them can be multiplied together to create a single overall transform (think, say, of a rotation by 90 degrees, a scaling by a, and another rotation by 90 degrees being represented in a single 2x2 matrix). Additive transforms throw a wrench in those plans.
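
A quick numpy illustration of the annoyance (the angle and numbers are arbitrary choices of mine):

```python
import numpy as np

theta = np.pi / 2                          # 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
S = np.diag([2.0, 3.0])                    # [x, y] -> [2x, 3y]
t = np.array([5.0, -1.0])                  # a translation

p = np.array([1.0, 0.0])

# Rotation then scaling collapses into one matrix...
M = S @ R
print(M @ p)                               # [~0, 3], same as S @ (R @ p)

# ...but the translation has to be bolted on as a separate addition:
print(M @ p + t)                           # no single 2x2 matrix does all three
```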

What did the mathematicians do about this? They moved to 3x3 matrices. It turns out that additive transforms can be represented as multiplicative transforms if a dimension is added to the mix. This means that computer graphics, which is generally 3D, uses 4x4 matrices to represent all transformations.

So now we have the general form of R as a 4x4 rotation matrix, T as a 4x4 translation matrix, and S as a 4x4 scaling transformation matrix, and we can transform the point A=[x, y, z, 1] (note the fourth dimension added, always a 1 for reasons I won’t go into here) as we please with:

A’ = T2 * R3 * T1 * S1 * R2 * R1 * A

…or any arbitrary sequence of (affine) transformations, generally premultiplied in the graphics world. (I’ve numbered the transforms “backwards” because the transform closest to the point affects it first.)
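
Here’s that machinery in miniature as a numpy sketch (the helper names T, S, and Rz are mine, and I’ve used a shorter chain than the A’ = T2 * R3 * T1 * S1 * R2 * R1 * A above):

```python
import numpy as np

# A 3D point in homogeneous coordinates: [x, y, z, 1].
def T(tx, ty, tz):                 # 4x4 translation matrix
    M = np.eye(4)
    M[:3, 3] = [tx, ty, tz]
    return M

def S(sx, sy, sz):                 # 4x4 scaling matrix
    return np.diag([sx, sy, sz, 1.0])

def Rz(theta):                     # 4x4 rotation about the z axis
    c, s = np.cos(theta), np.sin(theta)
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

A = np.array([1.0, 0.0, 0.0, 1.0])           # the point (1, 0, 0)

# Premultiplied chain: the transform nearest the point acts first.
A_prime = T(0, 0, 5) @ Rz(np.pi / 2) @ S(2, 2, 2) @ A
print(A_prime)                               # [~0, 2, 5, 1]
```

Note that translation is now just another matrix multiply, which is the whole point of the extra coordinate.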

If this stuff really interests you, you might even check out a book on computer graphics; many of them go into the matrix transformations used in building a scene from individual elements and the rotations, translations, and scalings used to put them where they belong. That’s how I learned all this stuff.

nitpick 1: A scalar is not quite the same thing as a 1x1 matrix (e.g. a 1x2 matrix can be pre-multiplied by a scalar but not by a 1x1 matrix, IIRC)

nitpick: I wouldn’t describe matrices as ‘fun’ so much as a necessary evil :smiley: More exactly, there are many cool things you can do with them which are nigh on impossible without them, but before you can, you have to learn the rule for inverting them, which is an effing pain.

Rule? What rule? I know of dozens of algorithms[sup]*[/sup], but no rule. Did I miss something in all those years?

[sup]*[/sup]What’s that? You have a symmetric positive-definite matrix, you say? Let me interest you in a Cholesky (LL[sup]T[/sup]) factorisation, or perhaps a little LDL[sup]T[/sup] number, if you prefer to avoid square roots.

No? How about something from Gauss, Crout or Doolittle, then? Would you like pivoting with that? Partial or full?
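
For the morbidly curious, the Cholesky option looks like this in numpy (the matrix below is just something I cooked up to be symmetric positive-definite):

```python
import numpy as np

# Manufacture a symmetric positive-definite matrix: A = B B^T works
# whenever B is invertible.
B = np.array([[2.0, 0.0],
              [1.0, 3.0]])
A = B @ B.T

# Cholesky factorisation: A = L L^T with L lower-triangular.
L = np.linalg.cholesky(A)
print(np.allclose(L @ L.T, A))     # True

# "Inverting" A against a right-hand side is then two triangular solves;
# nobody sane forms the explicit inverse.
b = np.array([1.0, 2.0])
y = np.linalg.solve(L, b)          # solve L y = b
x = np.linalg.solve(L.T, y)        # solve L^T x = y
print(np.allclose(A @ x, b))       # True
```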

The dictionary? :slight_smile:

From dictionary.com: rule – n., 5. Mathematics. A standard method or procedure for solving a class of problems.

By all means, RM Mentock, enlighten me on what this standard method or procedure is.

What’s wrong with some of the ones you mentioned?

Because they are a class of methods, not a single standard method or procedure.

So your objection is that there is more than one, not just one?

Well, yes. “The rule for inverting them” doesn’t exist.

Same as there’s no rule for finding the roots of a function, and there’s no rule for finding the maximum or minimum of a function.

There are methods, but no rule.

No rules, I like the sound of that. I’m going to suggest it as the theme for our next math dept colloquium.

No offence taken :slight_smile:

Your example’s flawed. Had you chosen a 2x2 matrix, you’d be right, but you can left multiply a 1x2 matrix by a 1x1.

However, the ring R and the set of 1x1 matrices over R are isomorphic, so we can regard them as the same when it’s convenient to do so (and not against the rules).

You can also use matrices to solve first-order linear differential equations. They are also useful for transforming higher-order linear differential equations into a system of first-order linear equations.
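
A small scipy/numpy sketch of the second point: turn y’’ = -y into a first-order system and solve it with a matrix exponential (the harmonic oscillator is my pick of example, nothing special about it):

```python
import numpy as np
from scipy.linalg import expm

# Stack the state x = [y, y'] so that y'' = -y becomes the first-order
# system x' = A x:
A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

x0 = np.array([1.0, 0.0])          # initial conditions y(0) = 1, y'(0) = 0

# The solution of x' = A x is x(t) = exp(A t) x0:
t = np.pi / 3
x_t = expm(A * t) @ x0
print(x_t[0], np.cos(t))           # both ~0.5 -- y(t) = cos(t), as expected
```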

This has all gotten a little out of hand for the question that was posed. I will have to take it from here on my own. I hope Desmostylus and McKlentock can settle their disputes without resorting to some weird form of “matrix”-type violence.