I got to wondering about this recently. What algorithms are used to compute matrix determinants and inverses in situations where a fast computation is needed? Clearly, the definition of a determinant is no good, and all of the inverse formulas I can think of require that the determinant be computed.

I know that there’s like a whole subfield of information science that deals with this sort of thing, but I think I can wrap my mind around a 2 × 2 with no additional background. There is no faster way of computing the determinant of a 2 × 2 than ad - bc, is there?

Probably not.

One fairly simple way of computing a matrix inverse that doesn’t involve determinants (but also isn’t the fastest – just simple) is to take the matrix A augmented with the identity I:

A:I

and perform Gauss-Jordan elimination on A; whenever you do any row-adds, row-scales, or row-swaps, you perform the same operation on the I half. Eventually, A gets reduced to I, and the I half becomes A^-1:

I:A^-1

e.g., starting from

[ 1  2 ]
[ 3  4 ]

augment with the identity:

[ 1  2 : 1  0 ]
[ 3  4 : 0  1 ]

Multiply row 1 by -3:

[-3 -6 : -3  0 ]
[ 3  4 :  0  1 ]

Add row 1 to row 2:

[-3 -6 : -3  0 ]
[ 0 -2 : -3  1 ]

Multiply row 2 by -3:

[-3 -6 : -3  0 ]
[ 0  6 :  9 -3 ]

Add row 2 to row 1:

[-3  0 :  6 -3 ]
[ 0  6 :  9 -3 ]

Divide row 1 by -3 and row 2 by 6:

[ 1  0 :  -2    1 ]
[ 0  1 : 3/2 -1/2 ]

So the inverse of:

[1 2]

[3 4]

is

[-2 1]

[3/2 -1/2]

This can be seen by multiplying those two matrices together (you should end up with the identity matrix).
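The augmented-matrix procedure above can be sketched in a few lines of Python (a toy illustration, not a tuned library routine). Using exact fractions sidesteps floating-point round-off, and partial pivoting handles a zero landing in the pivot position:

```python
from fractions import Fraction

def invert(matrix):
    """Invert a square matrix via Gauss-Jordan elimination on [A : I]."""
    n = len(matrix)
    # Build the augmented matrix [A : I], using Fractions for exact arithmetic.
    aug = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
           for i, row in enumerate(matrix)]
    for col in range(n):
        # Partial pivoting: swap up the row with the largest pivot.
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if aug[pivot][col] == 0:
            raise ValueError("matrix is singular")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Eliminate the pivot column from every other row.
        for r in range(n):
            if r != col:
                factor = aug[r][col]
                aug[r] = [x - factor * y for x, y in zip(aug[r], aug[col])]
    # The right half is now A^-1.
    return [row[n:] for row in aug]

print(invert([[1, 2], [3, 4]]))  # [[-2, 1], [3/2, -1/2]] (as Fractions)
```

Running it on the [1 2; 3 4] example above reproduces the same inverse.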

There are optimizations you can do based on the sparseness of A, whether it is triangular, etc., but I’m not really an expert in the field. Do a Google search on ‘matrix inverses’, ‘computing a matrix inverse’, ‘computing the inverse of a matrix’, etc.

Did someone inverse their determinant again? I hate when that happens. These threads are the ones that make me feel like a complete moron.

Apologies if this reposts.

I don’t know enough about the complexity of the operations, but I do recall learning a few things that might apply. I think LU decomposition (Gaussian elimination) or Cholesky decomposition may be what’s needed if all you want are inverses or determinants. I also seem to recall QR and Householder transformations, but I think those reduce the matrix to tridiagonal or diagonal form, so they have their applications but may end up being overkill.
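For what it’s worth, here’s a rough Python sketch of how Cholesky gives a cheap determinant for a symmetric positive-definite matrix: if A = L·Lᵀ with L lower-triangular, then det(A) is the square of the product of L’s diagonal. (Toy code; a non-positive-definite input just makes the sqrt raise an error.)

```python
import math

def cholesky(A):
    """Cholesky factorization A = L L^T for symmetric positive-definite A.
    Returns the lower-triangular factor L."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal entry
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]  # below-diagonal entry
    return L

def det_from_cholesky(A):
    # det(A) = det(L) * det(L^T) = (product of L's diagonal)^2
    L = cholesky(A)
    d = 1.0
    for i in range(len(A)):
        d *= L[i][i]
    return d * d

print(det_from_cholesky([[4.0, 2.0], [2.0, 3.0]]))  # ≈ 8.0 (= 4*3 - 2*2)
```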

It also seems that there’s more optimization that can be done with sparse or dense matrices (obviously, I suppose), and the more efficient algorithm may depend on such characteristics. (A quick search reveals at least one highly targeted algorithm.)

And I never studied them, but I think there are even approximation methods used for this sort of thing, which might be what’s used when really fast results are needed.

I’m pretty sure that there aren’t any optimizations specifically for dense matrices, since any matrix can be considered dense; it just happens that some of the entries are zero. Sparse matrices, though, do often have optimizations. For instance, one commonly encounters matrices which are “band diagonal”, which is to say that the only nonzero elements are within some distance of the diagonal (tridiagonal is a special case of this).
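As an example of how much band structure buys you, a tridiagonal system can be solved in O(n) rather than O(n³) with the Thomas algorithm (a specialized LU sweep). A rough Python sketch, assuming the usual three-array representation and no pivoting (fine for diagonally dominant systems, not safe in general):

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system in O(n).
    a: sub-diagonal (length n-1), b: main diagonal (length n),
    c: super-diagonal (length n-1), d: right-hand side (length n)."""
    n = len(b)
    cp = [0.0] * n  # modified super-diagonal
    dp = [0.0] * n  # modified right-hand side
    cp[0] = c[0] / b[0]
    dp[0] = d[0] / b[0]
    # Forward sweep: eliminate the sub-diagonal.
    for i in range(1, n):
        denom = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / denom
    # Back substitution.
    x = [0.0] * n
    x[n - 1] = dp[n - 1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# [2 1 0]       [4]
# [1 2 1] x  =  [8]   has solution x = [1, 2, 3]
# [0 1 2]       [8]
print(thomas_solve([1.0, 1.0], [2.0, 2.0, 2.0], [1.0, 1.0], [4.0, 8.0, 8.0]))
```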

In Numerical Recipes, they give a couple of methods to obtain a general matrix inverse (using Gauss-Jordan elimination and LU decomposition). They calculate a matrix determinant by first performing an LU decomposition. Both of these have an operation count of order N[sup]3[/sup]. See Chapter 2 of any version of the book for a pretty good explanation of these.
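The LU-based determinant idea boils down to this: reduce A to upper-triangular form with partial pivoting; the determinant is then the product of the pivots, with a sign flip for each row swap. A rough O(N[sup]3[/sup]) Python sketch (my own toy version, not the Numerical Recipes code):

```python
def determinant(matrix):
    """Determinant via Gaussian elimination with partial pivoting, O(n^3)."""
    n = len(matrix)
    A = [row[:] for row in matrix]  # work on a copy
    det = 1.0
    for col in range(n):
        # Pick the largest remaining pivot for numerical stability.
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        if A[pivot][col] == 0:
            return 0.0  # singular matrix
        if pivot != col:
            A[col], A[pivot] = A[pivot], A[col]
            det = -det  # each row swap flips the sign
        det *= A[col][col]
        # Zero out the entries below the pivot.
        for r in range(col + 1, n):
            factor = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= factor * A[col][c]
    return det

print(determinant([[1.0, 2.0], [3.0, 4.0]]))  # ≈ -2.0 (= ad - bc = 1*4 - 2*3)
```

For the 2 × 2 case this agrees with ad - bc, up to floating-point round-off.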