OK, I see what you’re talking about. I wouldn’t say that you’re using Cramer’s rule (as far as I can tell, it’s only Cramer’s rule if you’re solving a system of equations), but I do see the similarities. It’s where you compute the adjoint matrix and divide that by the determinant, right? I’d certainly agree that that’s a piss-poor way to do it.
Yes, I agree. But note that there are also efficient ways to compute both the determinant and the inverse. I guess what I should have said earlier is that the equation
Adj(A)*A = det(A)*I
is of fundamental importance and Cramer’s rule is derived from it.
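To make that identity concrete, here's a little Python sketch (the helper names are mine, and cofactor expansion takes exponential time, so this is strictly a toy for small matrices):

[code]
# A small sketch (pure Python) of the identity Adj(A)*A = det(A)*I,
# using cofactor expansion. Fine for tiny matrices; hopeless for big ones.

def minor(A, i, j):
    """Matrix A with row i and column j removed."""
    return [row[:j] + row[j+1:] for k, row in enumerate(A) if k != i]

def det(A):
    """Determinant by cofactor expansion along the first row (exponential time)."""
    n = len(A)
    if n == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(n))

def adjugate(A):
    """Transpose of the cofactor matrix."""
    n = len(A)
    return [[(-1) ** (i + j) * det(minor(A, j, i)) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0], [1, 3, 1], [0, 1, 2]]
adjA = adjugate(A)

# Adj(A)*A should equal det(A)*I, entry by entry.
prod = [[sum(adjA[i][k] * A[k][j] for k in range(3)) for j in range(3)]
        for i in range(3)]
print(prod)    # [[8, 0, 0], [0, 8, 0], [0, 0, 8]]
print(det(A))  # 8
[/code]

Dividing Adj(A) by det(A) gives the inverse, which is exactly the piss-poor method above once n gets at all large.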
It looks to me like the obvious way of computing both the inverse and the determinant, using row reduction, is of order n^3, but I believe better methods are known, maybe even close to n^2. In any case, matrix computations are highly parallelizable, so a multiprocessor machine can do much better.
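For reference, here's roughly what the O(n^3) row-reduction computation of the determinant looks like in Python (just a sketch; the pivot tolerance 1e-12 is an arbitrary choice):

[code]
# Gaussian elimination with partial pivoting, tracking the determinant as
# the product of the pivots (with a sign flip for each row swap).

def det_by_elimination(A):
    A = [row[:] for row in A]          # work on a copy
    n = len(A)
    d = 1.0
    for c in range(n):
        # pick the largest pivot in column c (partial pivoting)
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        if abs(A[p][c]) < 1e-12:
            return 0.0                  # singular, to working precision
        if p != c:
            A[c], A[p] = A[p], A[c]
            d = -d                      # each swap flips the sign
        d *= A[c][c]
        for r in range(c + 1, n):
            m = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= m * A[c][k]
    return d

print(det_by_elimination([[2, 1, 0], [1, 3, 1], [0, 1, 2]]))  # 8.0, up to rounding
[/code]

The three nested loops are where the n^3 comes from, and the inverse works the same way if you carry an identity matrix along and reduce all the way.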
I found that throughout linear algebra, computing determinants was extremely useful, particularly for determining linear independence.
I found Cramer’s Rule quite useful for small matrices (no more than 3x3), and for finding transition matrices.
Of course, I’ve only taken Linear Algebra I and II, and have never used any of this in real world applications.
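Since it came up, here's Cramer's Rule written out for a 3x3 system in Python (a toy sketch; it also doubles as the linear-independence test, since det(A) != 0 exactly when the columns are independent):

[code]
# Cramer's rule for a 3x3 system Ax = b:
# x_i = det(A_i) / det(A), where A_i is A with column i replaced by b.

def det3(m):
    """Determinant of a 3x3 matrix, expanded directly."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer3(A, b):
    d = det3(A)
    if d == 0:
        raise ValueError("singular system")
    # replace column i of A with b, take the determinant, divide by det(A)
    return [det3([[b[r] if c == i else A[r][c] for c in range(3)]
                  for r in range(3)]) / d
            for i in range(3)]

# Solves 2x + y = 3, x + 3y + z = 5, y + 2z = 3  ->  x = y = z = 1
print(cramer3([[2, 1, 0], [1, 3, 1], [0, 1, 2]], [3, 5, 3]))
[/code]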
Regarding the time it takes to compute the determinant:
Using standard methods, such as Gaussian elimination, it takes cubic time (i.e., for nxn matrices, time O(n[sup]3[/sup])). Note that this is also the time it takes to multiply matrices, invert matrices, and solve systems of linear equations.
It was proven in a series of results by several people (much of it is collected in Aho, Hopcroft & Ullman's book, IIRC) that in fact:
- Matrix multiplication and matrix inversion take the same order of time.
- Finding determinants takes the same or less order of time as matrix multiplication.
- Ditto solving systems of linear equations.
- Ditto membership testing for general context-free languages.
- Ditto Boolean transitive closure.
Hence, an improved algorithm for matrix multiplication means the entire group immediately gets better solutions. (But it's possible that determinants can be computed faster than multiplication.)
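To give a flavor of why inversion is no harder than multiplication, here's a sketch of the classic reduction via the Schur complement: inverting an nxn matrix costs two half-size inversions plus a constant number of multiplications. (Assumptions on my part: n is a power of 2 and every leading block is invertible, which holds for, e.g., positive definite matrices; a real implementation would need pivoting. I'm using numpy just for the arithmetic.)

[code]
import numpy as np

def block_inverse(M):
    """Invert via the Schur complement: two half-size inversions plus a
    handful of multiplications, so inversion costs O(matrix multiplication).
    Assumes n is a power of 2 and the leading blocks are invertible
    (true for symmetric positive definite M, for instance)."""
    n = M.shape[0]
    if n == 1:
        return np.array([[1.0 / M[0, 0]]])
    h = n // 2
    A, B = M[:h, :h], M[:h, h:]
    C, D = M[h:, :h], M[h:, h:]
    Ai = block_inverse(A)                 # recurse on half size
    S = D - C @ Ai @ B                    # Schur complement
    Si = block_inverse(S)                 # recurse on half size
    top = np.hstack([Ai + Ai @ B @ Si @ C @ Ai, -Ai @ B @ Si])
    bot = np.hstack([-Si @ C @ Ai, Si])
    return np.vstack([top, bot])

M = np.array([[4.0, 1, 0, 0], [1, 3, 1, 0], [0, 1, 2, 1], [0, 0, 1, 2]])
print(np.allclose(block_inverse(M) @ M, np.eye(4)))   # True
[/code]

Unrolling the recursion, the total cost is dominated by the multiplications, so any improvement in matrix multiplication carries straight over.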
Volker Strassen was the first to beat cubic. He showed it could be done in time about O(n[sup]2.807[/sup]). While better, it was not sufficiently better in practice to see much real use. Victor Pan later improved this bound, but his method was completely impractical. The exponent then started changing so rapidly that it's usually just called "alpha". Here's a page that gives alpha as 2.376. Again, none of these may be in actual use. Hence inverses, determinants, etc., can all be done with that exponent.
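For the curious, here's a bare-bones version of Strassen's algorithm in Python (power-of-2 sizes only, and it allocates a pile of temporaries, which is the memory-hog problem mentioned later in the thread):

[code]
# Bare-bones Strassen multiplication, mostly to show where the seven
# recursive products and the memory pressure come from.

def add(X, Y):  return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
def sub(X, Y):  return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    # split into quadrants (this copying is part of why it's a memory hog)
    A11 = [r[:h] for r in A[:h]]; A12 = [r[h:] for r in A[:h]]
    A21 = [r[:h] for r in A[h:]]; A22 = [r[h:] for r in A[h:]]
    B11 = [r[:h] for r in B[:h]]; B12 = [r[h:] for r in B[:h]]
    B21 = [r[:h] for r in B[h:]]; B22 = [r[h:] for r in B[h:]]
    # seven products instead of eight -- that's the whole trick:
    # T(n) = 7*T(n/2) + O(n^2)  =>  O(n^log2(7)) ~ O(n^2.807)
    M1 = strassen(add(A11, A22), add(B11, B22))
    M2 = strassen(add(A21, A22), B11)
    M3 = strassen(A11, sub(B12, B22))
    M4 = strassen(A22, sub(B21, B11))
    M5 = strassen(add(A11, A12), B22)
    M6 = strassen(sub(A21, A11), add(B11, B12))
    M7 = strassen(sub(A12, A22), add(B21, B22))
    C11 = add(sub(add(M1, M4), M5), M7)
    C12 = add(M3, M5)
    C21 = add(M2, M4)
    C22 = add(sub(add(M1, M3), M2), M6)
    top = [r1 + r2 for r1, r2 in zip(C11, C12)]
    bot = [r1 + r2 for r1, r2 in zip(C21, C22)]
    return top + bot

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
[/code]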
My Strassen story: He was visiting my thesis prof and attended the first seminar I ever gave on my own research results. It was a completely different area, yet he was one slide ahead the whole way, always asking questions about things I had on the next slide. Amazing.
What a great webpage that is. IIRC he had to take that down a couple of years ago due to some legal constraint. I’m glad to see it up once again.
You RC, javaman. You can read about it in the MathWorld Author’s Note, or you can read all about it in Eric’s Commentary on the Shutdown of MathWorld. It seems to me like he’s still getting shafted, but at least the site is back.
About computing determinants and inverses and using Cramer's Rule: I guess I see now that the most straightforward way is probably the worst, ftg. So what do you suggest? Are the O(n[sup]3[/sup]) methods a good balance of simplicity and efficiency, or are the more efficient methods worth it?
Personally, I’d always go for a more efficient way, unless other constraints existed (for instance, Strassen’s algorithm for matrix multiplication is recursive and, IIRC, a memory hog). But that’s just me.