# Is there a difference between a 1x1 matrix and a scalar?

On one hand, matrix multiplication is only defined for an MxN matrix times an NxO matrix, so a 1x1 times an NxM doesn’t work for N =/= 1 unless you hand-wave 1x1 matrices into scalars, which can multiply any matrix. On the other hand, I’ve had several math books that seem to treat x[sup]T[/sup]x as the same as (x dot x). HOWEVER, these same books have never presented the scenario (x[sup]T[/sup]x)M where M is an NxO matrix with N =/= 1.

I tried googling it, and the results are full of non-consensus. Yes, I know you can define operations however the hell you want; I’m asking whether there’s a general consensus on this for most purposes.

I’ve heard Matlab treats 1x1 matrices as scalars, so there’s one data point, I guess, but that may be for technical reasons rather than rigorous math reasons.
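To make the strict-dimensions reading concrete, here’s a quick Python sketch (the function and variable names are mine, just for illustration) of matrix multiplication defined only for MxN times NxO, with no special-casing of 1x1 matrices:

```python
# Strict matrix multiplication: an m x n matrix times an n x o matrix,
# with matrices written as lists of rows. No 1x1-to-scalar hand-waving.

def matmul(A, B):
    """Multiply A (m x n) by B (n x o); reject mismatched inner dimensions."""
    n = len(A[0])
    if len(B) != n:
        raise ValueError(
            f"dimension mismatch: {len(A)}x{n} times {len(B)}x{len(B[0])}")
    return [[sum(A[i][k] * B[k][j] for k in range(n))
             for j in range(len(B[0]))]
            for i in range(len(A))]

one_by_one = [[5]]            # a 1x1 matrix
two_by_two = [[2, -2],
              [3, 8]]         # a 2x2 matrix

print(matmul(one_by_one, [[3]]))      # 1x1 times 1x1 is fine: [[15]]
try:
    matmul(one_by_one, two_by_two)    # 1x1 times 2x2: inner dims 1 != 2
except ValueError as e:
    print(e)
```

Under this definition the 1x1 simply cannot be multiplied onto a 2x2; the OP’s “handwave them to scalars” step is exactly what this code refuses to do.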

I’m not an expert at all, honestly, but having come from a physics background and taken some linear algebra, I wouldn’t consider them to be philosophically the same thing, but I can see how computationally it would work to say that a 1x1 matrix is a scalar.

Yeah, as a Comp Scientist and programmer, [x] = x feels wrong. Especially if you stick to the strict definition of matrix multiplication with MxN times NxO I mentioned. On the other hand, I’ve seen the x[sup]T[/sup]x = (x dot x) identity (which implies [r] = r where r = x dot x) fairly often.

nm

It all comes down to purpose though… Or more precisely, context.

You can’t really answer that question without considering context.

It’s been a while since I’ve used MATLAB, but I think it’s more accurate to say that it treats scalars as 1x1 matrices. The “Mat” in MATLAB stands for “matrix”, after all.

They’re different objects by the usual construction, but there is a canonical (i.e., obvious) isomorphism between a field F and the 1x1 matrices with entries in F. (F here is usually taken to be the real numbers or the complex numbers, but need not be). It is often convenient to blur the distinction between the two.

If you’re comfortable with saying “there is an isomorphism between X and Y means X is Y”, then you have no problem with saying “{1x1 matrices over reals, matrix addition, matrix multiplication} is the same as {real numbers, scalar addition, scalar multiplication}”. Of course then you will also be happy to say the same thing replacing “1x1 matrices” with “5x5 matrices with all off-diagonal entries zero and all diagonal entries the same”. And you will have no problem saying “2x2 matrices with both diagonal elements the same and one off-diagonal element the negative of the other are complex numbers”.

There are people who have no problem saying all these things – that the form of the elements of a system doesn’t matter, only the relationships between them. If every true statement about one system has a corresponding true statement about the other, who can say whether we are looking at one or the other?

Conversely, someone who denies this might have a problem with the statement “the rationals are a subset of the reals”. If you define reals as equivalence classes of Cauchy sequences of rationals, then a rational is clearly not an equivalence class of Cauchy sequences of rationals, so it can’t also be a real. It’s only by noting the isomorphism r -> the equivalence class of Cauchy sequences containing {r,r,r,r,r,…} that you can say “a subset of the reals is isomorphic to the rationals”, and if you accept the statements above, “the rationals are a subset of the reals”.
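The “2x2 matrices are complex numbers” identification above can be checked in a few lines. A quick Python sketch (helper names are mine, purely illustrative) verifying that matrices of the form [[a, -b], [b, a]] multiply exactly like the complex numbers a + bi:

```python
# Check the isomorphism a + bi  <->  [[a, -b], [b, a]]:
# the matrix of z*w should equal the product of the matrices of z and w.

def to_matrix(z):
    """Represent the complex number z as the 2x2 matrix [[a, -b], [b, a]]."""
    a, b = z.real, z.imag
    return [[a, -b], [b, a]]

def matmul2(A, B):
    """Multiply two 2x2 matrices."""
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0],
             A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0],
             A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

z, w = 3 + 2j, 1 - 4j
assert matmul2(to_matrix(z), to_matrix(w)) == to_matrix(z * w)
```

Every true statement about multiplying complex numbers has a corresponding true statement about multiplying these matrices, which is exactly the point being made.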

When do you care about the differences?

For one, when you’re programming. Most programming languages will treat a scalar as being a fundamentally different sort of object than a single-element array. Likewise, many will also distinguish between integers and rational numbers with zero fractional part, though that distinction can be a bit more flexible.
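For instance, in Python (just one data point, but typical of how languages draw the line), a scalar, a one-element list, and a nested one-element list are three distinct objects, and “multiplication” means something different for each:

```python
# A scalar, a 1-element "vector", and a 1x1 "matrix" as plain Python objects.
scalar = 5
vector = [5]
matrix = [[5]]

# They are distinct values with distinct types...
assert scalar != vector and vector != matrix

# ...and the * operator does entirely different things to them:
assert scalar * 3 == 15            # ordinary arithmetic
assert vector * 3 == [5, 5, 5]     # list repetition, not scaling
```

So at the programming level the identification is not just unmade; the operations themselves diverge.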

This is a bit like asking whether the integer 5 and the complex number 5 are the same. Indeed, it’s exactly like asking that…

Going along with leahcim, category theorists sometimes call asking (or, worse, answering) this sort of thing “evil”; they say you shouldn’t be talking about some ill-defined notion of equality, but about isomorphism instead, where the answer is cut-and-dried (once you specify what structure you consider relevant for the isomorphism).

You might say there are two different notions of multiplication: matrix-by-matrix multiplication and matrix-by-scalar multiplication. (Indeed, you probably say that already…). And then it’s fine to identify scalars with 1-by-1 matrices, just understanding that they can be multiplied by other matrices by two different (but equivalent) operators.

Well, there’s obviously an isomorphism. But let’s say I’m writing an equation or solving a problem, should I write

w[sup]T[/sup]wX
or
(w dot w)X

Where w is a column vector and X is a NxM matrix with N=/=1.

If we accept that the matrix and the scalar are equal, there’s no problem with either – if we say they are not equal, suddenly the former becomes impossible while the latter remains valid. You may say “play it safe, use the dot product then,” but like I said, I’ve seen the w[sup]T[/sup]w = (w dot w) identity many times. Is this just because, if no other operations are present, it can be losslessly converted to a scalar, or can it operate as a scalar even within a single expression with no need to be “converted”?

ETA: In fact, the former completely breaks associativity of matrix math: you HAVE to do the vector multiplication first.

ETA2: Oh, you ninja’d me, I guess it makes sense that you can think of two identical looking multiplication operators with different context-sensitive functions if you want.
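Here’s the w[sup]T[/sup]wX example worked through in Python (the variable names and sample values are mine): with w a column vector and X 2x3, w[sup]T[/sup]w is 1x1, so under strict rules (w[sup]T[/sup]w)X is undefined and you must collapse it to the scalar (w dot w) before scaling X:

```python
# The OP's expression (w^T w) X with w a column vector and X an N x M
# matrix, N =/= 1. Strictly, w^T w is 1x1 and cannot left-multiply a 2x3
# matrix, so we take it as the scalar (w dot w) and scale X entrywise.

w = [1, 2]              # a column vector, written flat
X = [[1, 0, 2],
     [3, 1, 0]]         # a 2x3 matrix

wTw = sum(wi * wi for wi in w)              # w dot w = 5, read as a scalar
scaled = [[wTw * x for x in row] for row in X]
print(scaled)           # -> [[5, 0, 10], [15, 5, 0]]
```

Note the order of evaluation is forced: the inner product has to happen first, which is exactly the associativity-breaking point in the ETA above.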

Right, there’s a question, within any particular formal system, of whether the two are defined as the same thing (that is, whether the isomorphism needs to be explicitly invoked, or is implicit). This will be quite dependent on the caprices and whims of the setup of that particular formal system. But you might well ask about the conventions, as you’ve done. I think the answer to the question of conventions is just as you might have feared: there’s no great consensus. Some people in some contexts are perfectly happy to take the canonical isomorphism as implicit and some aren’t.

Similarly, in some contexts, some people will be perfectly happy with my writing a scalar λ to mean the 5 by 5 matrix which is λ times the identity, and some will demand I write “λI”, and you could even imagine some rare souls demanding something like “λI[sub]5[/sub]”. It’s that sort of thing.
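The “λ means λI” convention is easy to make explicit. A small Python sketch (helper names are mine) showing that the scaled identity really does act on matrices the way the bare scalar does:

```python
# Writing a scalar lam to mean the n x n matrix lam * I, made explicit.

def scaled_identity(lam, n):
    """The n x n matrix lam * I: lam on the diagonal, 0 elsewhere."""
    return [[lam if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(A, B):
    """Strict matrix multiplication for lists of rows."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[2, -2],
     [3, 8]]

# (7 * I) A equals entrywise scaling of A by 7:
assert matmul(scaled_identity(7, 2), A) == [[7 * x for x in row] for row in A]
```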

Ummm . . . I don’t think so. For example, what is 5 x (3 + 2i)? Now what is (5 + 0i) x (3 + 2i)? Now look at the OP’s question and what is:

5 x [2 -2]
    [3  8]

and

[5] x [2 -2]
      [3  8]

Hint: can you even perform the second multiplication?

Sure you can… just as in the first example, use the interpretation of “x” as “The operation which multiplies a scalar by a matrix” rather than as “The operation which multiplies an m by n matrix by an n by r matrix”. Yes, this means there are various operations around named “x”; everyone already acknowledges that there’s various multiplication operations around in this context.
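One way to make “various operations named x” concrete is operator dispatch, the way overloaded operators work in most languages. A hypothetical Python sketch (names and error message are mine):

```python
# Dispatch "times" on the left operand's type: scalar-by-matrix
# multiplication if it's a number, strict matrix-by-matrix otherwise.

def times(a, B):
    """Multiply a by B, choosing the operation from a's type."""
    if isinstance(a, (int, float)):
        return [[a * x for x in row] for row in B]      # scalar version
    if len(a[0]) != len(B):
        raise ValueError("dimension mismatch")          # strict matrix version
    return [[sum(a[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(a))]

M = [[2, -2],
     [3, 8]]

print(times(5, M))       # scalar interpretation: [[10, -10], [15, 40]]
try:
    times([[5]], M)      # strict 1x1-matrix interpretation fails
except ValueError as e:
    print("as a 1x1 matrix:", e)
```

Whether “5 x M” works thus depends entirely on which of the two operations named “x” you reach for, which is the point of the post above.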

Also, going back to what leahcim remarked upon: there’s nothing special about 1-by-1 matrices as such. The real question is whether we should identify “1” with “identity matrix”, which we might do for any particular size of matrix we care about. And some people are happy to do this in some contexts (surely we’ve all seen it…), and some people are not.

(And in just the same way as your argument, of course, there are programming languages which will not let you multiply integers by complex numbers, or add integers to rationals, or what have you, without inserting an explicit isomorphism…)

Basically, some people are happy to identify these, and some people are not, and even those who are happy to identify these are sheepish to write a multiplication of matrices whose dimensions don’t match, even though they continue to feel no shame at the implicit identification in contexts where this is not being done. And no one ever actually asks the question “Are they the same?”. They’re the same whenever people want them to be. Those are the empirical facts.

And if you’re paying attention, people who argue PEMDAS is magic, this sort of thing is exactly why we argue that mathematical notation is frustratingly ambiguous even at the multiplication by juxtaposition level.

Oh, but it is: It enables me to say that if you get an answer different from me, you must have failed basic arithmetic and are therefore a tosser and, in fact, a wanker!

See? Magic! 