I am an elementary math teacher. I teach math only to 6th graders. I took 3 levels of calculus in college, none of which covered matrices. I know I can think mathematically, but the application of a matrix seems a bit out of my league. Perhaps if someone could offer me a worked out example, I might be able to understand more readily.
moejack, imagine you are trying to solve the set of equations 2x + 3y = 7 and 4x - y = 7 simultaneously, as I’m sure you’ve done many times in your math education. In the language of matrices, this is the square (2x2) matrix [2 3; 4 -1] times the column [x; y] equals the column [7; 7], which is analogous to the simple equation 2x = 6, which you’d solve by multiplying both sides by the inverse of 2, one half.
To solve the matrix problem, you multiply both sides (on the left!) by the inverse of the square matrix (only square matrices have true inverses), and the result is [x; y] = [2; 1], from which you can read off the solution to the original set of equations.
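If a worked example in code helps, here is a minimal sketch using Python with NumPy (my choice of tool, nothing the thread requires):

```python
# Solve 2x + 3y = 7 and 4x - y = 7 by inverting the coefficient matrix.
import numpy as np

A = np.array([[2, 3],
              [4, -1]])   # coefficient matrix [2 3; 4 -1]
b = np.array([7, 7])      # right-hand side [7; 7]

x = np.linalg.inv(A) @ b  # multiply both sides on the left by A^-1
print(x)                  # [2. 1.], i.e. x = 2, y = 1
```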
In order to do all that, you have to know how to multiply matrices and find their inverses, but the rules are straightforward and easily learned. The theory of matrices has been generalized to much more complicated applications, but that can come later.
As a statistician, I use matrices mainly to simplify notation. I think this is true for other fields as well.
IOW, a matrix is simply a notational tool that allows (sometimes very) complex functions to be represented in simple forms. The shared understanding of the matrix notation allows concepts to be explained with many fewer words than would otherwise be required.
But I don’t think matrices are strictly necessary; you can explain matrix functions and concepts in non-matrix terms, though this gets extremely cumbersome even for simple ones.
If you spend that much time with math, moejuck, I’m sure you’re capable of learning some stuff with matrices. Here’s a recent thread dealing with the use of a matrix to solve a system of linear equations, which is probably the first application of matrices that most people see. There’s an example worked out in there, and if you read the whole thread I expect you’ll be able to figure out what’s going on. This isn’t a deep application of a matrix, but it’s a good place to start; I’d recommend picking up a basic linear algebra book if you want to learn more.
Well, sure, but in the same way, long division can be expressed as multiple iterations of subtraction. That doesn’t mean that division is not necessary–it’s just that it can be expressed in a different way.
Very few large systems of equations are solved by directly inverting the coefficient matrix, as I described in my previous post. But that is a starting point for understanding matrices, which are an extremely valuable mathematical tool. In that sense, the more valuable something is, the more “necessary” it is.
Basically, they’re the same thing at that point. The matrix approach keeps you from writing down so many characters and steps, although in certain instances the algebra approach can be shortened if you notice things about the problem that let you take shortcuts.
The analogy to long division of polynomials and synthetic division is probably apt. Synthetic division is usually much quicker, but it’s just another way of doing long division. I can think of a lot of polynomial division problems where I could find the answer quicker without synthetic division, though.
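For the curious, here is a small sketch of synthetic division in Python (the function name and example polynomial are mine, just for illustration):

```python
# Synthetic division: divide a polynomial (coefficients listed from the
# highest power down) by (x - c).
def synthetic_division(coeffs, c):
    quotient = [coeffs[0]]                     # bring down the leading coefficient
    for a in coeffs[1:]:
        quotient.append(a + c * quotient[-1])  # multiply by c, add the next coefficient
    return quotient[:-1], quotient[-1]         # quotient coefficients, remainder

# (x^3 - 6x^2 + 11x - 6) / (x - 1) = x^2 - 5x + 6, remainder 0
print(synthetic_division([1, -6, 11, -6], 1))  # ([1, -5, 6], 0)
```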
moejuck, I didn’t mean to imply that you can’t think mathematically or that you aren’t a “smart person” as I so succinctly put it. I was addressing the issue that so many people assume a certain topic is “a bit out of my league”, when a little exploration can go a long way to changing that perception.
In many ways, matrices are easier to understand than calculus; there are certainly no limits involved, just a few operations that work a little differently from regular algebra (dot product, cross product, inversion, for starters). Matrix algebra, like all mathematical fields, is a tool to be applied to problems as appropriate. Usually, many tools will do the same job with varying degrees of ease. asterion, for two equations with two unknowns I do find it easier to just slog through the algebra. Ten equations in ten unknowns, or worse ten equations in twelve unknowns, I’m going to use matrices.
moejack, when stypticus starts using matrices to solve systems of ten unknowns, they probably don’t find the inverse of a matrix. There are other techniques, such as Gaussian elimination, that involve manipulating the coefficients as well as the constants and that are very similar to what you’d do to solve the system algebraically. There, the analogy to synthetic division is very apt.
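Here is a rough sketch of that elimination idea in Python with NumPy, assuming a square system with nonzero pivots (the function name is mine):

```python
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float)
    b = b.astype(float)
    n = len(b)
    # Forward elimination: zero out everything below each pivot,
    # applying the same row operations to the constants in b.
    for i in range(n):
        for j in range(i + 1, n):
            m = A[j, i] / A[i, i]
            A[j, i:] -= m * A[i, i:]
            b[j] -= m * b[i]
    # Back-substitution on the resulting triangular system.
    x = np.zeros(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

print(gauss_solve(np.array([[2, 3], [4, -1]]), np.array([7, 7])))  # [2. 1.]
```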
Back in the dark ages (when I was in college) I just couldn’t wrap my mind around matrices in Stats class, because I had learned the word first in geology and archeology. A matrix in those fields is the background material that encases the fossil or what-have-you. Another word for “matrix” is “womb,” which made sense to me.
Another common use of matrices is to represent operations on vectors. If I express a vector as a column matrix (a matrix with one column and n rows), then I can use an n x n matrix to do an operation on that vector.
For instance, suppose we’re in a plane, so that vectors have just two components. I have a vector which I’ll call A, and I want to rotate it counterclockwise by an angle θ. I might call the thing I want “R_θ A”. OK, so that’s what I call it, but how do I compute it? I use a rotation matrix to represent R_θ. That rotation matrix is
[ cos θ   -sin θ ]
[ sin θ    cos θ ]
Meanwhile, suppose my vector A has components A_x and A_y. Then my column matrix for the vector looks like
[ A_x ]
[ A_y ]
So the rotation R_θ A looks like
[ cos θ   -sin θ ] [ A_x ]
[ sin θ    cos θ ] [ A_y ]
which I can work out using the rules for matrix multiplication.
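As a quick check, here is that rotation carried out in Python with NumPy (the 90-degree example is mine):

```python
# Rotate the vector (1, 0) counterclockwise by 90 degrees.
import numpy as np

theta = np.pi / 2
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # rotation matrix R_theta
A = np.array([1, 0])                             # components A_x, A_y

print(R @ A)  # approximately [0. 1.]: the vector now points along +y
```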
OK, I think things are starting to come together for me. I at least understand the applications that a matrix can be used for.
There have been several references to the “rules” of working with matrices. May we continue with a discussion of what these rules are and how they work?
The most common matrix operations are addition, scalar multiplication, and multiplication.
Let A and B be matrices, and let k be a real number. It’s customary to denote the (i, j)th entry of a matrix P as p(i, j).
Assume that A is m x n (that’s m rows and n columns). We’ll denote kA as C. C is an m x n matrix, and c(i, j) = k*a(i, j).
If A and B are both m x n matrices, we’ll denote A + B as C. c(i, j) = a(i, j) + b(i, j).
For matrix multiplication, the number of columns of the left multiplicand must be the same as the number of rows of the right multiplicand. This means that sometimes AB is defined, but BA is not. Even if they are both defined, they’re not generally equal.
So you’ve got A, an m x n matrix, and B, an n x p matrix. Let C denote AB. Then c(i, j) = sum(a(i, k)*b(k, j), k = 1 to n).
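If it helps to see those rules in action, here is a direct transcription into plain Python (the function names are mine, purely for illustration):

```python
def scalar_mult(k, A):
    # c(i, j) = k * a(i, j)
    return [[k * a for a in row] for row in A]

def mat_add(A, B):
    # c(i, j) = a(i, j) + b(i, j); A and B must both be m x n
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_mult(A, B):
    # c(i, j) = sum over k of a(i, k) * b(k, j); A is m x n, B is n x p
    m, n, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(p)]
            for i in range(m)]

A = [[2, 3], [4, -1]]
print(scalar_mult(2, A))        # [[4, 6], [8, -2]]
print(mat_add(A, A))            # [[4, 6], [8, -2]]
print(mat_mult(A, [[2], [1]]))  # [[7], [7]] -- the system from earlier
```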
I did not mean to imply that I don’t understand math (I have a Master’s degree in Engineering, so while not an expert, I do understand math to some extent). I am just appreciative of the calibre of the members of this board.