Is there a difference between a 1x1 matrix and a scalar?

I don’t know Haskell, but in C you can mix integers and floats in the same arithmetic expression; the compiler may warn you, but the result will probably resemble what you had in mind. Try to use any operator designed for numbers on a matrix of any size, though, and it will complain mightily, and if it compiles at all, it will give you a result that’s completely unanticipated.

In C, you have to craft a program out of stone knives, bearskins, and sabres that are sharpened on both ends. I love the language, I do, but some aspects of it are more similar to juggling chainsaws on a freshly-waxed floor while wearing new socks than to what most people imagine as developing software.

OCaml is probably the most extreme example here: Its type system doesn’t allow for any promotion from integer to floating-point; it even has different functions for integer and floating-point arithmetic (e.g.: * is integer multiplication, *. (that’s ‘star-dot’) is floating-point multiplication). Haskell has a type system that is, in many ways, even stricter than OCaml’s, but Haskell does auto-promotion and only has one set of basic mathematical functions.
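To make that contrast concrete, here is a minimal Haskell sketch of the “one set of operators” point; the names are just illustrative, and the OCaml lines appear only as comments for comparison:

```haskell
-- Haskell has a single overloaded (*) for every Num type:
--
--   (*) :: Num a => a -> a -> a

intProduct :: Int
intProduct = 6 * 7          -- (*) instantiated at Int

doubleProduct :: Double
doubleProduct = 6.0 * 7.0   -- the same (*), instantiated at Double

-- In OCaml the two would be spelled with different operators:
--   let int_product   = 6 * 7        (* integer multiplication   *)
--   let float_product = 6.0 *. 7.0   (* floating-point, "star-dot" *)

main :: IO ()
main = print (intProduct, doubleProduct)
```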

What kind of auto-promotion does Haskell do? You can’t + an Int and a Float, an Integer and a Float, a Float and a Double, or even an Int and an Integer in Haskell… You can only + two things of the same type (though, yes, that type can be any Num type).

(Where this gets confusing is that you can write, say, “3 + 5.1”. But that’s because 3 has the polymorphic type “Any Num type”, and the 5.1 has the polymorphic type “Any Fractional type”, and these two can be +ed if both are instantiated at the same type, for any type which is both Num and Fractional. But nothing is ever auto-promoted; in this case, the 3 was never of an Integral type. By way of contrast, you can’t write something like “length [1, 2, 3] + 5.1”, because length is defined to return Ints and 5.1 cannot be an Int.)
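A small sketch of both points, assuming nothing beyond the Prelude and GHC’s ordinary defaulting: mixing two numeric types needs an explicit fromIntegral, and the literals themselves are polymorphic rather than promoted.

```haskell
-- Mixed-type addition is rejected; literals are polymorphic, not promoted.

anInt :: Int
anInt = 3

aFloat :: Float
aFloat = 5.1

-- rejected = anInt + aFloat           -- type error: Int and Float never unify

mixed :: Float
mixed = fromIntegral anInt + aFloat    -- explicit conversion, not auto-promotion

sumOfLiterals :: Double
sumOfLiterals = 3 + 5.1                -- both literals instantiated at Double

-- broken = length [1, 2, 3] + 5.1     -- type error: length returns an Int

fixed :: Double
fixed = fromIntegral (length [1, 2, 3]) + 5.1

main :: IO ()
main = print (mixed, sumOfLiterals, fixed)
```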

You make a good point; what Haskell does works a lot like auto-promotion, but there are some differences, as you pointed out in your inevitable follow-up post.

So, compared to OCaml, Haskell looks like it has auto-promotion. That’s probably what I should have said.

As a working category theorist I have to go along with this, although I have never used “evil” this way (I’ve seen it, though). It all depends on what you mean by “different”. As someone pointed out, you cannot multiply a 1 x 1 matrix by an n x m matrix unless n = 1, so there is a difference.
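Written out explicitly, this is just the standard dimension rule, nothing new:

```latex
% An (a x b) matrix times a (b x c) matrix gives an (a x c) matrix, so a
% 1x1 matrix (s) can only left-multiply an n x m matrix when n = 1:
\[
  \begin{pmatrix} s \end{pmatrix}
  \begin{pmatrix} b_{11} & \cdots & b_{1m} \end{pmatrix}
  =
  \begin{pmatrix} s\,b_{11} & \cdots & s\,b_{1m} \end{pmatrix},
\]
% whereas the scalar product s B is defined for a matrix B of any shape.
```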

Thinking about it, I would prefer to keep them separate. There is a similar question of whether a one-dimensional vector space is the same as the field of scalars, and also whether an n-dimensional vector is the same as an n x 1 matrix. It all depends on what you want to do with them. Context always matters.
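One way to make “context always matters” precise for the vector-space version of the question: the identification exists, but it depends on a choice of basis, so it isn’t canonical. A short sketch:

```latex
% For any nonzero vector v in a one-dimensional k-vector space V, the map
\[
  \varphi_v : k \to V, \qquad \varphi_v(s) = s\,v
\]
% is an isomorphism of vector spaces, but a different choice of basis vector
% v' = c v (with c \neq 0, 1) gives a different isomorphism. V is isomorphic
% to k in many ways, with no preferred one singled out.
```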

The point is, some of these isomorphisms (like the real numbers to the complex numbers with imaginary part 0) are so “obvious” that no one even questions treating them as equalities, except to make an obscure point about category theory. Others, like the one between the complex numbers and a subset of the 2x2 real matrices, are sufficiently obscure that no one would treat them as equalities, except to make an obscure point about category theory.
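For anyone who hasn’t seen that second isomorphism, here it is spelled out; multiplication on the two sides matches:

```latex
% Identify a + bi with the 2x2 real matrix below; then matrix multiplication
% reproduces complex multiplication:
\[
  a + b\,i \;\longleftrightarrow\;
  \begin{pmatrix} a & -b \\ b & a \end{pmatrix},
  \qquad
  \begin{pmatrix} a & -b \\ b & a \end{pmatrix}
  \begin{pmatrix} c & -d \\ d & c \end{pmatrix}
  =
  \begin{pmatrix} ac - bd & -(ad + bc) \\ ad + bc & ac - bd \end{pmatrix},
\]
% i.e. exactly (a + bi)(c + di) = (ac - bd) + (ad + bc)i.
```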

Treating 1x1 matrices as being identical to the underlying field is somewhere in between the two extremes.

How is that an isomorphism if one produces an answer and the other doesn’t? What’s the point of even calling it multiplication unless you are shooting for the same answer?

All this talk seems to go the route of saying that any symbol can have any interpretation, but that negates all of math. You need prescriptivism to make sense of it all.