 # Linear algebra problem

Hoping for some help with some maths. I’ve been playing with a problem which I’ve managed to get into a mathematical form and was wondering if anyone could help me make it look more like matrix equations.

I’ve managed to get a matrix A, where A[ij] = A[ji]

I know that for my matrix, the condition A[ij] = A[ic]*A[cj] should also hold, where c is some constant (say 1).

Is there a matrix way to test that the condition does indeed hold?

What have you found A to be?

I’m not quite sure what you’re asking - why can’t you just multiply together A[sub]ic[/sub]A[sub]cj[/sub] and see if it does equal A[sub]ij[/sub]?
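For what it’s worth, that direct check is only a few lines of code. A minimal sketch in Python (the list-of-lists representation, the 0-based index for c, and the name `satisfies_condition` are all my choices, not anything from the thread):

```python
def satisfies_condition(A, c, tol=1e-9):
    """True if A[i][j] == A[i][c] * A[c][j] holds for every i, j."""
    n = len(A)
    return all(abs(A[i][j] - A[i][c] * A[c][j]) <= tol
               for i in range(n) for j in range(n))

# The 2x2 all-ones matrix satisfies the condition for c = 0.
print(satisfies_condition([[1, 1], [1, 1]], 0))  # True
```

This is just the brute-force element-by-element test; it costs n² multiplications per candidate c.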

It’s probably possible to fiddle around and get some product of matrices which would express this notion, but I’m not sure what you’d gain.

It would help if you could define your terms. What is [ij]? Rows and columns of matrix “A”? Your notation is unconventional to me. Perhaps you want to talk about matrix A multiplied by a scalar, c? Also, when multiplying matrices, are you trying to perform dot product multiplication, or cross product multiplication?

If A is a matrix, is [ij] also denoting a matrix? And, if so…what is

One quick tip: c=1 may be a bad value to pick, since 1 is the identity value when multiplying; hence, it may seem to work out so nicely only to prove absolutely nothing. - Jinx

I assumed A[ij] meant A[sub]ij[/sub], that is, the (i,j)th component of A, though I agree it’s not standard - I couldn’t see any other interpretation that made sense, and that one looked a reasonable notation.

OTOH, I’ll probably have egg on my face.

I just hope he comes back and tells us, and it wasn’t for some homework he’s done another way now… It looks to me like he means for A[ij] to be one component of his matrix, the one in row i and column j (or vice-versa, but since his matrix is symmetric, it doesn’t matter) (I think this notation is used in some programming languages). In that case, he doesn’t need to worry about what kind of multiplication he’s using, since all of the things he’s multiplying are scalars, and there’s only one kind of multiplication on scalars.

Your first condition is a fairly common one, that the matrix is symmetric, but I can’t think of ever having seen a matrix where the second holds (not saying it’s impossible, but it probably doesn’t have a name or specific methods). It seems vaguely like a multiplication table for a finite group, though.

Shade, as for just testing all of them, it may be that the matrix is too large to do that conveniently, especially if he’s working on pencil and paper rather than a computer, and especially if he doesn’t know yet what c is.

By the way, if you don’t know c, you might be able to find it by looking at the diagonal elements, since A[cc] must be 1.
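To sketch that search in code (a hedged sketch, using my own 0-based indexing and function name): setting i = j = c in the condition gives A[cc] = A[cc]², so each candidate diagonal entry must be 0 or 1, and 0 would force the whole matrix to be zero, as noted just below. That leaves the entries equal to 1 as the only candidates:

```python
def candidate_cs(A, tol=1e-9):
    """Indices c with A[c][c] == 1 -- the only possible choices of c,
    since A[c][c] = A[c][c]**2 forces each diagonal entry to be 0 or 1,
    and a 0 there would force the whole matrix to be zero."""
    return [c for c in range(len(A)) if abs(A[c][c] - 1) <= tol]

print(candidate_cs([[1, 2], [0.5, 1]]))  # [0, 1]
```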

Or zero?

I think so, though wouldn’t that require the whole matrix to be zero?

Oh, right, so it would.

Sorry to be unclear. First let me deal with a mistake in my original post: it should read A[ij] = 1/A[ji] not A[ij] = A[ji].
Indeed Shade, I did mean that A[ij] was the (i,j)th component of the matrix. Sorry for not being able to do subscripts- didn’t realise that my notation would be so unclear.

c can be any index (but obviously no greater than the size of the matrix). One other thing: the matrix must be square (ie. the number of rows and columns is equal), but that’s probably obvious already.

Ah, in that case, all of the diagonal elements are ± 1, so that probably won’t help much in finding c.

This problem is driving me somewhat nuts, by the way. It seems like I should be able to prove something interesting about this matrix, but I’m not sure what.

Just as a random thought, if we have some never-zero function f(i) on the set of indices, then the matrix A[ij] = f(i)/f(j) will match the OP’s constraints for any choice of c, but I’m not sure if there are any matrices matching the OP’s conditions which don’t satisfy that.
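That A[ij] = f(i)/f(j) construction is easy to sandbox. A sketch (the particular f below is an arbitrary choice of mine, and the helper names are made up):

```python
def build_matrix(f, n):
    """A with A[i][j] = f(i) / f(j), for a never-zero f on the indices."""
    return [[f(i) / f(j) for j in range(n)] for i in range(n)]

def check(A, c, tol=1e-9):
    """Does A[i][j] == A[i][c] * A[c][j] hold for every i, j?"""
    n = len(A)
    return all(abs(A[i][j] - A[i][c] * A[c][j]) <= tol
               for i in range(n) for j in range(n))

A = build_matrix(lambda i: 2.0 ** i, 4)    # f(i) = 2^i, an arbitrary choice
print(all(check(A, c) for c in range(4)))  # True: the condition holds for every c
```

The algebra behind the True: A[ic]·A[cj] = (f(i)/f(c))·(f(c)/f(j)) = f(i)/f(j) = A[ij], with f(c) cancelling regardless of which c you pick.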

I’m not certain that I understand the question, or that my reasoning is correct, but I think you’re dealing with the outer product of two vectors, vectors that have the property that the elements of one are the reciprocals of the corresponding elements of the other. The ith row of your matrix along with the ith column constitute such a pair for any i (and your c is not unique; if it works for one c, it works for all). All of the diagonal elements of the matrix must be 1 (they’re equal to the product of something and its reciprocal). Like any non-zero outer product, this matrix has rank one (any row is just some constant times any other, and likewise for columns). It has just one non-zero eigenvalue, whose associated eigenvector is any column. The value of this eigenvalue equals n for an n-by-n matrix (one way to see this is through the trace). If you divide the matrix by this eigenvalue, you get an idempotent matrix (it equals its own square).

But any of this may well be wrong.
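These claims can all be checked numerically. A sketch in plain Python (the vector `u` and the helper names are my own arbitrary choices), building the outer product of `u` with its elementwise reciprocals and verifying A² = nA and the eigenvalue-n column:

```python
def matmul(A, B):
    """Product of two square matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def close(A, B, tol=1e-9):
    """Elementwise comparison of two same-shape matrices."""
    return all(abs(a - b) <= tol
               for ra, rb in zip(A, B) for a, b in zip(ra, rb))

u = [1.0, 2.0, 4.0]                      # arbitrary non-zero entries
n = len(u)
A = [[ui / uj for uj in u] for ui in u]  # outer product of u and its reciprocals

# A^2 = n*A, i.e. A divided by n is idempotent.
nA = [[n * x for x in row] for row in A]
print(close(matmul(A, A), nA))  # True

# Any column (here the first) is an eigenvector with eigenvalue n.
col = [A[i][0] for i in range(n)]
Acol = [sum(A[i][k] * col[k] for k in range(n)) for i in range(n)]
print(all(abs(Acol[i] - n * col[i]) <= 1e-9 for i in range(n)))  # True
```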

Thanks Josh_dePlume.

A^2 = nA is what I’m looking for.

I probably should have noticed that I had effectively defined the square of the matrix earlier (doh!)

Thanks a lot.

Whenever you see n in the context of a square matrix, it usually means the dimension of that matrix. Better to use another letter, like [symbol]g[/symbol].

Now, given a matrix A with a[sub]ij[/sub] = 1/a[sub]ji[/sub], and some c such that a[sub]ij[/sub] = a[sub]ic[/sub]a[sub]cj[/sub], here are some things I notice:

For some constant [symbol]g[/symbol], A[sup]2[/sup] = [symbol]g[/symbol]A. If A is invertible, g = det(A).

A is not necessarily invertible. The n x n matrix with a[sub]ij[/sub] = 1 satisfies the conditions of the problem, and is not invertible. Furthermore, its determinant is 0, even though the sum of the entries in any column is n.

If A is invertible, then det(A) is equal to the sum of the entries on the cth column.

a[sub]cc[/sub] = 1.

The 2 x 2 matrix Q, with q[sub]11[/sub] = q[sub]12[/sub] = q[sub]21[/sub] = 1 and q[sub]22[/sub] = -1 satisfies the conditions of the problem with c = 1, but not with c = 2. There goes that conjecture.

Of course, none of this gives you an easy method to check for the existence of c. The condition that a[sub]cc[/sub] = 1 helps, but it won’t get you all the way. I’ll look at the problem some more.

Quoth ultrafilter:

Q does not satisfy the conditions. For any matrix that does,
a[sub]ii[/sub] = a[sub]ic[/sub]a[sub]ci[/sub] = 1 for every i, but q[sub]22[/sub] = -1.
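A quick numeric confirmation of that objection (0-based indices are my convention; the thread's c = 1 is c = 0 here):

```python
Q = [[1, 1], [1, -1]]   # the proposed 2x2 example, in 0-based form

def satisfies(A, c, tol=1e-9):
    """Does A[i][j] == A[i][c] * A[c][j] hold for every i, j?"""
    n = len(A)
    return all(abs(A[i][j] - A[i][c] * A[c][j]) <= tol
               for i in range(n) for j in range(n))

# Fails at i = j = 1: q_22 = -1, but q_21 * q_12 = 1.
print(satisfies(Q, 0))  # False
```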

Strike that, then.

He does mean the ‘dimension’ (order) of the matrix. This follows from the stuff about eigenvalues above.

A matrix that satisfies the conditions is never invertible, unless it’s a 1x1 matrix (in which case, because the matrix is just (1), g = det(A) is true, but not very useful). This is so because the matrix has rank one (see above). Your g is always equal to the trace of the matrix. But the diagonal elements of the matrix must all be 1, so the trace equals the order of the matrix, n. So g = n.

I need to see those bits spelled out in a little more detail.

For n = 3, every A which satisfies the conditions has a form:

``````

[ 1    a   ab]
[1/a   1   b ]
[1/ab 1/b  1 ]

``````

for some a and b. Furthermore, it has the same form for c = 1, 2, or 3. I’m guessing that’s no coincidence, and that for n in general, A will have n-1 free parameters. Indeed, for n = 4, c = 1, I get:

``````

[ 1  a  ab abd ]
[    1   b  bd ]
[        1  d  ]
[           1  ]

``````
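That parameterisation is easy to check numerically. A sketch with the blank lower-triangle entries filled in as reciprocals (my reading of the omitted entries) and arbitrary sample values for the parameters:

```python
a, b, d = 2.0, 3.0, 5.0   # arbitrary sample parameters

A = [[1.0,             a,           a * b, a * b * d],
     [1 / a,           1.0,         b,     b * d],
     [1 / (a * b),     1 / b,       1.0,   d],
     [1 / (a * b * d), 1 / (b * d), 1 / d, 1.0]]

def satisfies(A, c, tol=1e-9):
    """Does A[i][j] == A[i][c] * A[c][j] hold for every i, j?"""
    n = len(A)
    return all(abs(A[i][j] - A[i][c] * A[c][j]) <= tol
               for i in range(n) for j in range(n))

print(all(satisfies(A, c) for c in range(4)))  # True for every c
```

This matrix is exactly the f(i)/f(j) form suggested earlier, with f running over 1, 1/a, 1/(ab), 1/(abd), which is why every choice of c works.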

Ah, I see on further reflection that the form I gave is equivalent to this statement by Chronos, and also to the one given in the following two posts by Josh_dePlume. I concur with you guys. 