Lately I’ve been perusing an old college algebra text, as part of an effort to beef up my math skills, and I’ve come across Determinants. It appears that a Determinant is a way of representing a sum of products taken from the diagonals of a square grid of numbers. But what does that buy you? It seems just as difficult to evaluate such a sum in determinant form as it would be in any other way. Why are determinants used? What real world phenomena are they used to model?
Determinants are used to check whether or not a matrix is invertible. If a matrix is invertible, then its determinant != 0, and vice versa. I think they also have something to do with eigenvectors and eigenspaces, but I'm not sure; I want to put algebra OUT of my head now!
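For anyone who wants to see that concretely, here's a minimal sketch (my own toy matrices, assuming Python with numpy):

```python
# det != 0 -> invertible; det == 0 -> singular (no inverse exists).
import numpy as np

A = np.array([[2.0, 1.0],
              [4.0, 3.0]])   # det = 2*3 - 1*4 = 2
B = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row is twice the first, so det = 0

print(np.linalg.det(A))      # 2.0 -> np.linalg.inv(A) works
print(np.linalg.det(B))      # 0.0 -> np.linalg.inv(B) raises LinAlgError
```

(In floating point you test "det close to zero" rather than exact equality, but the principle is the same.)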
One thing I used determinants for in my dissertation was to find natural frequencies of a vibrating system. Without going into too much detail, the partial differential equations and associated boundary conditions that model the dynamic behavior of a vibrating system can be formulated into a matrix. This matrix includes a frequency variable. The values of frequency that make the matrix singular are natural frequencies that solve the original PDEs. A singular matrix has a determinant of zero. So solving this problem computationally (which was what I did) just involves using a search algorithm to find frequency values that set the determinant to zero.
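Here's the shape of that computation as a toy sketch (mine, not the actual dissertation code; assumes scipy, and a made-up two-mass system with stiffness matrix K and mass matrix M):

```python
# Natural frequencies w are the values where det(K - w^2 M) = 0.
# Scan a frequency range for sign changes of the determinant,
# then root-find on each bracketing interval.
import numpy as np
from scipy.optimize import brentq

M = np.eye(2)                       # toy mass matrix
K = np.array([[2.0, -1.0],
              [-1.0, 2.0]])         # toy stiffness matrix

def char_det(w):
    return np.linalg.det(K - w**2 * M)

ws = np.linspace(0.01, 3.0, 300)
vals = [char_det(w) for w in ws]
roots = [brentq(char_det, ws[i], ws[i + 1])
         for i in range(len(ws) - 1) if vals[i] * vals[i + 1] < 0]
print(roots)                        # ~[1.0, 1.732], i.e. sqrt(1) and sqrt(3)
```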
One very important thing to note is that the definition of a determinant is not how you compute it. Based on what you said, it's also possible you saw one or more other definitions that are really, really bad ways of computing it. In Real Life people use things like LU matrix decomposition, which is significantly faster. (Strassen, Pan and others have even sped that up.)
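Here's the LU trick in miniature (my sketch, assuming scipy; numpy's own det works essentially this way via LAPACK). It's O(n^3), versus the factorial blowup of cofactor expansion:

```python
import numpy as np
from scipy.linalg import lu_factor

def det_via_lu(A):
    lu, piv = lu_factor(A)                      # factor A = P L U
    swaps = np.sum(piv != np.arange(len(piv)))  # each row interchange flips the sign
    return (-1.0) ** swaps * np.prod(np.diag(lu))

A = np.random.rand(5, 5)
print(det_via_lu(A), np.linalg.det(A))          # the two should agree
```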
A huge number of properties of a matrix follow from its determinant and/or the determinants of its submatrices, e.g., positive definiteness (you can check it from the determinants of the leading principal submatrices). Just knowing that det(AB) = det(A)det(B) is a Big Help; there's a quick check of that in the sketch a few lines down. My fave:
Eigenvalues and Eigenvectors: Given a matrix A, a nonzero vector x, and a constant L, if
Ax = Lx
then L is an Eigenvalue and x an Eigenvector of A.
Note that if you take the determinant of (A - LI) you get a polynomial in L whose roots are the Eigenvalues. Eigenvalues are in turn useful for determining the behavior of simple mechanical systems, electronic circuits, etc. (Which is why the previous posters are so keen on them.)
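To make both of those facts concrete (my own 2x2 examples, assuming sympy):

```python
import sympy as sp

A = sp.Matrix([[2, 1],
               [1, 2]])
B = sp.Matrix([[0, 1],
               [3, 2]])

# det(AB) = det(A) det(B)
print(sp.det(A * B) == sp.det(A) * sp.det(B))   # True (-9 on both sides)

# The roots of det(A - L*I) are the Eigenvalues of A.
L = sp.symbols('L')
char_poly = sp.det(A - L * sp.eye(2))           # (2 - L)**2 - 1
print(sp.solve(char_poly, L))                   # [1, 3]
```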
The determinant can be contrasted with the permanent, which is defined by the same formula but with all plus signs, no negatives. There is no known efficient way to compute the permanent. If you can prove that an efficient way does or does not exist, you will be instantly world famous and get your name on the front page of the New York Times.
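To see the contrast, here's the permanent computed by brute force over all n! permutations (my sketch, plain Python; there are cleverer exponential methods, but nothing polynomial is known):

```python
from itertools import permutations
from math import prod

def permanent(M):
    # Same terms as the determinant's formula, but every sign is +1.
    n = len(M)
    return sum(prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

M = [[1, 2],
     [3, 4]]
print(permanent(M))   # 1*4 + 2*3 = 10 (the determinant is 1*4 - 2*3 = -2)
```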
The question “What good are determinants?” is akin to asking “What good are negative numbers?” Trust us, they have an incredible number of applications.
Having just poked around the web I do find that there are many real-world applications, as intriguing as they are mysterious to me at the moment.
It motivates me to persevere.
Another place in which the determinant shows up is in the change-of-variables formula for multivariable integrals. The word you’re looking for is Jacobians, if you’re interested in searching.
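The classic example is polar coordinates, where the Jacobian determinant is what turns dx dy into r dr dt in double integrals. A quick check (mine, assuming sympy):

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
F = sp.Matrix([r * sp.cos(t), r * sp.sin(t)])  # (x, y) as functions of (r, t)
J = F.jacobian([r, t])                         # 2x2 matrix of partial derivatives
print(sp.simplify(J.det()))                    # r
```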
Also, in my algebra class, we used matrices to evaluate systems of equations that had three variables. There is some rule or other that tells you how to work this, but I can't recall its name right now.
Of course it comes to me the second I hit the “submit” button-
Cramer’s rule! (possibly with a K)
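For a 2x2 system it looks like this (a quick sketch of mine with numpy and made-up numbers; note the warning in the next post before using it on anything big): replace one column of the coefficient matrix with the right-hand side, and divide determinants.

```python
import numpy as np

# Solve: 2x +  y = 5
#         x + 3y = 10
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

det_A = np.linalg.det(A)
x = np.linalg.det(np.column_stack([b, A[:, 1]])) / det_A   # x = 1
y = np.linalg.det(np.column_stack([A[:, 0], b])) / det_A   # y = 3
print(x, y)   # matches np.linalg.solve(A, b)
```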
Do NOT ever use, learn or teach Cramer’s rule. It is so horribly inefficient it causes permanent brain damage to any who have seen it and are not aware of how Really Rotten of An Algorithm it is.
(Used to compute inverses, it is quintic, while any intelligent method for inverses is cubic or less. I can't imagine how even an idiot comes up with a method worse than quartic.)
The only way to really understand the usefulness of matrices and determinants is to study Linear Algebra. This is a pretty abstract subject, but it is amazing how powerful and magical it can be.
Lots of good responses so far, and there are many more. Here’s a quick one that may be easy to grasp, off the top of my head. Taking a determinant can tell you whether a system of vectors is linearly dependent.
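Quick illustration (my own vectors, assuming numpy): stack the vectors as columns of a square matrix; a zero determinant means they're linearly dependent.

```python
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([4.0, 5.0, 6.0])
w = u + v                      # deliberately a combination of u and v

print(np.linalg.det(np.column_stack([u, v, w])))           # ~0 -> dependent
print(np.linalg.det(np.column_stack([u, v, [0, 0, 1]])))   # -3 -> independent
```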
There's a type of transformation in the complex plane called something, though I don't remember what. It looks like f(z) = (az + b)/(cz + d). It turns out that the quantity ad - bc is immensely useful in describing this transformation, and behaves much like the determinant of a 2×2.
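The "behaves like a determinant" part can be made concrete (my sketch with numpy and made-up coefficients): write f(z) = (az + b)/(cz + d) as the matrix [[a, b], [c, d]]. Composing two such maps multiplies their matrices, so the ad - bc quantities multiply too, just like det(AB) = det(A)det(B).

```python
import numpy as np

def apply_map(M, z):
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

F = np.array([[1.0, 2.0], [3.0, 4.0]])
G = np.array([[0.0, 1.0], [1.0, 0.0]])   # the map z -> 1/z

z = 0.5 + 0.25j
print(apply_map(F, apply_map(G, z)))     # f(g(z))
print(apply_map(F @ G, z))               # same value: composition = matrix product
print(np.linalg.det(F @ G), np.linalg.det(F) * np.linalg.det(G))   # equal
```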
Eigenvectors and eigenvalues are also very useful in quantum mechanics.
And 3×3’s are a handy way to help you take the cross product, but this use for determinants is incidental.
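In code the mnemonic looks like this (my sketch, assuming numpy): expand the determinant with rows (i, j, k), (a1, a2, a3), (b1, b2, b3) along its top row, and the cofactors are exactly the cross product's components.

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

cross = np.array([
    a[1] * b[2] - a[2] * b[1],   # coefficient of i
    a[2] * b[0] - a[0] * b[2],   # coefficient of j (sign already flipped)
    a[0] * b[1] - a[1] * b[0],   # coefficient of k
])
print(cross, np.cross(a, b))     # both give [-3, 6, -3]
```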
Matrix inversion has already been mentioned, so here's another practical application. Where I work we have big models of economies (around 6 million equations). To solve these equations we linearise them and put them into a big matrix. Inverting the matrix gives an approximation to the solution, which can then be found by extrapolation. In an economic model such matrices tend to be very sparse (have lots of zero elements), so singular matrices (matrices that cannot be inverted) are a common hazard. Understanding why a matrix is singular helps you to play with the model to get a solution - for example by changing a few zeros to tiny numbers.
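A toy version of that workflow (entirely my own sketch, assuming scipy; the real models are of course vastly bigger):

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

A = lil_matrix((3, 3))
A[0, :] = [1.0, 2.0, 3.0]
A[1, 2] = 4.0
A[2, 2] = 5.0          # rows 2 and 3 only touch the last variable: singular
b = np.array([6.0, 4.0, 5.0])

A[1, 1] = 1e-8         # the "change a zero to a tiny number" trick
x = spsolve(A.tocsr(), b)
print(x)               # ~[3, 0, 1]
```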
Möbius transformations are what Achernar is thinking of.
Waitaminute, what does Cramer's rule have to do with computing inverses? The way I learned it, you use it to solve systems of equations. For a system of n equations in n variables, you need to compute the determinants of n + 1 matrices, each of which is n x n. I'm not sure exactly how long computing the determinant takes, but it's hard to imagine that this algorithm is quintic.
My algebra text introduced second-order determinants as a method for solving systems of two equations in two variables, and third-order determinants for three-equation systems in three variables. Assuming this principle extends to any number of variables and equations, I suppose yet another application would be that you could easily represent, in a computer, a much larger matrix than you could hope to handle manually, so in effect you can set up a program to solve horrendously large linear systems.
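That's exactly what the standard libraries let you do. A minimal sketch (mine, assuming numpy; under the hood it's LU decomposition rather than determinants):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000                          # far beyond anything you'd attempt by hand
A = rng.standard_normal((n, n))
x_true = rng.standard_normal(n)
b = A @ x_true                    # manufacture a system with a known answer

x = np.linalg.solve(A, b)
print(np.allclose(x, x_true))     # True
```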
I’ll say. Last time I took the GRE I had spent a few weeks brushing up on rudimentary linear algebra–lots of 2-equation systems in two variables. As you might expect, it seemed like half of the math section on the test comprised equations of just this type. I was able to increase my math score by about 200 points…amazing AND powerful, all right!
Actually, this is a bit strong. There are certain theoretical purposes for which Cramer's rule is important. It is not so much Cramer's rule itself as the expansion by minors that explains Cramer's rule. But as a computational device it is dreadful. Why was it taught? Well, when I was a student, computation just wasn't something we worried about.
I've seen Cramer's rule used to find inverses. In the form I've seen, that takes n^2 determinants of (n-1)x(n-1) matrices (plus one for the determinant of the whole matrix). Since the overwhelming majority of students only get the "and this is how it can be done" part, they don't know how bad it is. I can't tell you how many times a Math student has wandered into my office explaining they are supposed to write some matrix code, and I see that it's based on Cramer's rule. It really does cause brain damage.
I’ve never seen that use of Cramer’s rule. So is finding determinants cubic? I looked on the web for a running time, but couldn’t find anything.
I have to agree with ultrafilter: I have never seen Cramer's Rule used for anything but solving systems of linear equations, like it's explained in the Treasure Troves entry: Cramer's Rule. I didn't want to say anything earlier, because I'm a neophyte here. I can't even think of how it would be used to find an inverse…