What Are Multiplication and Addition?

I originally intended to ask about multiplication, but just realized the question applies analogously to addition.

There are operations that get called “multiplication” which operate over various fields, groups, and whatnot. (Yes, I just read a popularization of difficult math concepts which I barely understand.*) I am confused about the following question. What makes all these operations the same kind of operation? What makes them all “multiplication?” (Same question goes for addition.)

So, for example, why is it that the operation we term the “multiplication” of matrices is called “multiplication” and not, say, “matricular shuffling?”

How was it decided that what we’re doing to the matrices is in fact multiplication and not some new operation?

And so on to all the other fields/groups/etc. over which a multiplication or addition operation is defined.

Thanks for any help!

-Kris

*Unknown Quantity by Derbyshire.

The use of the terms probably arose by analogy with the familiar operations over R (or Q, or Z, or N as appropriate). There’s undoubtedly a category-theoretic view that does tie them all together, but I’m not familiar with it.

As ultrafilter said, at some level they’re defined by analogy. However, a helpful way to think about it might be the following: A ring is a set with two operations, + and ·. Part of the definition of a ring is that the addition and multiplication laws are distributive, i.e. that a(b+c) = (ab) + (ac), and (b+c)a = (ba) + (ca) for all members of the ring. In some sense, then, this axiom defines which of the two operations is “addition” and which is “multiplication”, since it will generally not be the case that the distributive laws are satisfied with the operations reversed. (In other words, we generally don’t have a + (bc) = (a+b)(a+c).)
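To make that concrete, here’s a minimal Python sketch (the variable names are mine, not from the thread) that brute-force checks, over a small sample of integers, that multiplication distributes over addition but not the other way around:

```python
# a*(b+c) == a*b + a*c should always hold for integers;
# a + b*c == (a+b)*(a+c) should fail for some triples.

vals = range(-3, 4)  # a small sample of integers

usual_law = all(a*(b + c) == a*b + a*c
                for a in vals for b in vals for c in vals)
reversed_law = all(a + b*c == (a + b)*(a + c)
                   for a in vals for b in vals for c in vals)

print(usual_law)     # True: multiplication distributes over addition
print(reversed_law)  # False: addition does not distribute over multiplication
```

So the distributive axiom really does pick out which operation gets to be called “multiplication.”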

(Aside: actually, that’s an interesting question — do there exist any non-trivial rings which are “reverse-distributive” in this sense?)

It’s not a ring, but that’s one of the properties of a boolean algebra.

Also, all operations called “addition” are commutative (that is, A + B = B + A), but not all multiplication operations are (such as the vector cross product: A cross B = -(B cross A)). So if you’re looking at a new operation, and don’t know or don’t care whether it commutes, it’s safer to call it a “multiplication” operation than an “addition” operation.
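Here’s a quick pure-Python check of that anticommutativity (the `cross` helper is mine, written from the standard component formula for the 3-D cross product):

```python
# Anticommutativity of the 3-D vector cross product: A x B = -(B x A).
def cross(a, b):
    ax, ay, az = a
    bx, by, bz = b
    return (ay*bz - az*by, az*bx - ax*bz, ax*by - ay*bx)

A = (1, 2, 3)
B = (4, 5, 6)
print(cross(A, B))                     # (-3, 6, -3)
print(tuple(-t for t in cross(B, A)))  # (-3, 6, -3): the negation of B x A
```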

Here’s a layman’s explanation of why matrix multiplication is called “multiplication.”

I assume this was all in the popular article you read but I’ll just repeat it.

The basic building block of matrix multiplication is the product of a row and a column.

For example:


|a11  a12| 

multiplied by

         |b11|
         |   |
         |b21|

The product is a11*b11 + a12*b21, which is a scalar, or ordinary number, in this case.

Now suppose I want to multiply two 2 by 2 matrices.


|a11   a12|      and    |b11   b12|
|         |             |         |
|a21   a22|             |b21   b22|

In accordance with the definition of matrix multiplication given above, I carry out the multiplication by multiplying the first row of |a| by the first column of |b| and putting the resulting number in the first row, first column of the answer. Then I multiply the second row of |a| by the first column of |b| and put that answer in the second row, first column of the answer. And so on. The result then is:


|a11*b11 + a12*b21         a11*b12 + a12*b22|
|                                           |
|a21*b11 + a22*b21         a21*b12 + a22*b22|

This could be called “shuffling” or “merging” or anything you want. However, since the process involves multiplication, why not call it that?

Addition is also involved, but the process couldn’t be called that because a different operation, matrix addition, was already defined. Furthermore, as was pointed out, addition is associative and this process is not: |a||b| does not equal |b||a|.

Using the method described you can show that for yourself.
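If you’d rather let a computer do the checking, here is the same row-times-column recipe written out in Python for 2 by 2 matrices (the `matmul2` name and the sample matrices are mine):

```python
# 2x2 matrix multiplication via "row times column", plus a check that
# |a||b| does not generally equal |b||a|.
def matmul2(a, b):
    return [[a[0][0]*b[0][0] + a[0][1]*b[1][0], a[0][0]*b[0][1] + a[0][1]*b[1][1]],
            [a[1][0]*b[0][0] + a[1][1]*b[1][0], a[1][0]*b[0][1] + a[1][1]*b[1][1]]]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
print(matmul2(A, B))  # [[2, 1], [4, 3]]
print(matmul2(B, A))  # [[3, 4], [1, 2]]  -- a different matrix
```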

Did I say that matrix multiplication is not “associative?” I of course meant “commutative.”

I might add that the multiplication of a row matrix by a column is related to the dot product of vectors.

Vectors are directed quantities and require more than one number to describe them. For example, velocity is a vector: we say that a car was going northeast at 45 mph.

Unit vectors at right angles to each other are defined, say j and k. j is horizontal and k is vertical on a graph.

The vector aj + ak would be a line from the origin to a point a units to the right and a units up on the graph. By the Pythagorean Theorem its length would be √(a[sup]2[/sup]+a[sup]2[/sup]), or a√2, and it would be directed up and to the right at an angle of 45º. In this notation our car going NE at 45 mph would be described as having a velocity of 31.8j + 31.8k.

A vector multiplication known as the dot, or scalar, product is defined. It is equal to the product of the magnitudes of the two vectors multiplied by the cosine of the angle between them.

For vectors given in the j, k form as above, the dot product (aj + bk)·(cj + dk) expands to

ac j·j + (ad + bc) j·k + bd k·k.

The angle between j and j is 0, so its cosine is one, and the magnitude of j is one, so j·j is 1. Likewise for k·k. j and k are at right angles, so their dot product is zero, and the dot product of the two vectors equals

ac + bd.

If I now write the first vector as a row matrix and the second as a column and multiply


 the row matrix 
|a   b|

 by the column matrix 

|c|
| |
|d|

I get ac + bd which is equal to the dot product of the two vectors.

Still another reason to call it multiplication.
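As a sanity check, here’s a short Python sketch (helper names are mine) confirming that the two definitions of the dot product — the component form ac + bd and the geometric form |u||v|cos(angle) — give the same number:

```python
import math

# Two definitions of the dot product for 2-D vectors u = (a, b), v = (c, d).
def dot_components(u, v):
    # a*c + b*d, the "row matrix times column matrix" answer
    return u[0]*v[0] + u[1]*v[1]

def dot_geometric(u, v):
    # |u| * |v| * cos(angle between them)
    mag_u = math.hypot(*u)
    mag_v = math.hypot(*v)
    angle = math.atan2(v[1], v[0]) - math.atan2(u[1], u[0])
    return mag_u * mag_v * math.cos(angle)

u, v = (1, 2), (3, 1)
print(dot_components(u, v))                    # 5
print(math.isclose(dot_geometric(u, v), 5.0))  # True, up to floating-point error
```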

I could start by pointing out that 1 x 1 matrices are essentially the same as (isomorphic to) the field of scalars, and for them matrix multiplication is just ordinary multiplication.

Here is a somewhat better explanation. The very first rings (in fact, the origin of the word “ring”) were the rings Z/nZ, the integers mod n. You could think of the elements as the integers 0, 1, …, n-1 and define addition and multiplication as ordinary addition and multiplication followed by reduction mod n (divide by n, throw away the quotient and keep the remainder).

Eventually, people realized that there were other examples of rings, and they kept on calling the operations addition and multiplication and using + and juxtaposition for them. (Use of * came only with computers, where it was necessary to have a sign found on a keyboard; earlier, if a sign was needed, a centered dot was used.)

Eventually, it was considered desirable to drop the commutativity of the multiplication. The first such example was the quaternions, discovered by William R. Hamilton in the early 1840s in order to understand rotations in three dimensions. Actually, Hamilton’s formulas didn’t really work. The first correct formulas for the rotation group were published by Olinde Rodrigues in 1840; he didn’t use quaternions, although it is possible to reinterpret his results in those terms. His paper is actually available online. IIRC, it was in the Bull. Soc. Math. France.
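The Z/nZ recipe — ordinary arithmetic followed by reduction mod n — is easy to write down in Python; here’s a sketch (names mine) that also brute-force verifies the distributive law for n = 6:

```python
# Addition and multiplication in Z/nZ: ordinary arithmetic, then reduce mod n.
n = 6
def add_mod(a, b): return (a + b) % n
def mul_mod(a, b): return (a * b) % n

print(add_mod(4, 5))  # 3, since 9 mod 6 = 3
print(mul_mod(4, 5))  # 2, since 20 mod 6 = 2

# Check the distributive law over every triple of elements.
elems = range(n)
distributive = all(mul_mod(a, add_mod(b, c)) ==
                   add_mod(mul_mod(a, b), mul_mod(a, c))
                   for a in elems for b in elems for c in elems)
print(distributive)   # True
```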

At this point, multiplication got identified with the composition of operations, at least in this case. Since the earliest groups were transformation groups, the group operation got to be called multiplication rather than composition. When groups were abstracted, people continued to call the operation multiplication.

A matrix can be viewed as a transformation on vectors. If A is an m x n matrix and v is an n-dimensional column vector, then Av is an m-dimensional column vector, and the m x n matrices can be viewed as the transformations of n-space to m-space. Moreover, if B is a q x m matrix, then the associativity B(Av) = (BA)v can be viewed as the statement that B o A = BA, which means the result of the composite transformation “apply A, then B” is the same as the transformation by BA.
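That identity B(Av) = (BA)v is easy to check numerically; here’s a small pure-Python sketch (the helper names and the sample matrices are mine):

```python
# Matrices as transformations: applying A then B to v gives the same
# result as applying the single product matrix BA.
def matvec(m, v):
    return [sum(m[i][k] * v[k] for k in range(len(v))) for i in range(len(m))]

def matmul(b, a):
    return [[sum(b[i][k] * a[k][j] for k in range(len(a)))
             for j in range(len(a[0]))] for i in range(len(b))]

A = [[1, 2], [0, 1], [3, 0]]   # 3x2: maps 2-space to 3-space
B = [[1, 0, 2], [0, 1, 1]]     # 2x3: maps 3-space back to 2-space
v = [1, 1]

print(matvec(B, matvec(A, v)))   # [9, 4]: apply A, then B
print(matvec(matmul(B, A), v))   # [9, 4]: apply BA once; same answer
```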

Finally, I will just mention in passing that all these things have been generalized beyond belief. There are non-associative rings (often with different identities assumed), rings with non-commutative addition and whatever you can think of. Also rings with no identity (sometimes called rngs) or with addition and multiplication but no negation (sometimes called rigs). Not to mention “rings with many objects”.

It doesn’t have to be associative either. Consider matrices acting on octonionic spaces.

If you’re asking “what are they?” in some sort of existential sense, there are a lot of answers.

But you seem to be asking “Why are the various ‘multiplications’ all called multiplication?” In that case, I have to point out that they aren’t always. In fact, many of us refer to the general concept as “composition”, since the most general motivation is the composition of functions rather than the multiplication of numbers. And yes, pretty much all “multiplications” can be rephrased as compositions of functions.

:eek: I guess you missed the word “layman” in my first post. I had to walk all the way upstairs to find out that “octonionic” isn’t in the International Dictionary either.

OK I’ll consider it but I don’t expect it will do me much good.:smiley:

Mathochist made up that term to make you feel dumb.

:wink:

There are examples of incomprehensible posts without made up words. Some of them right in this thread. :slight_smile:

Octonions do indeed exist. They are octets of real numbers, much the same way complex numbers are pairs of real numbers: Real numbers pair up to form complex numbers, which pair up to form quaternions, which pair up to form octonions.

Quaternions are associative but not commutative. Octonions are neither associative nor commutative. Very few people like octonions; they don’t get invited to many parties. :wink:
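That “pairing up” has a name — the Cayley-Dickson construction — and it’s compact enough to sketch in Python. This is my own toy implementation, using one standard sign convention, (a, b)(c, d) = (ac − d\*b, da + bc\*), where \* is conjugation; other conventions differ by signs. It exhibits both failures mentioned above: quaternions aren’t commutative, and octonions aren’t even associative.

```python
# Cayley-Dickson doubling: reals -> complex -> quaternions -> octonions.
# A number is a list of 2**n reals, split into halves (a, b).
def conj(x):
    # Conjugate: (a, b)* = (a*, -b); a real number is its own conjugate.
    if len(x) == 1:
        return x[:]
    h = len(x) // 2
    return conj(x[:h]) + [-t for t in x[h:]]

def add(x, y):
    return [s + t for s, t in zip(x, y)]

def mul(x, y):
    # (a, b)(c, d) = (ac - d*b, da + bc*), applied recursively.
    if len(x) == 1:
        return [x[0] * y[0]]
    h = len(x) // 2
    a, b, c, d = x[:h], x[h:], y[:h], y[h:]
    left = add(mul(a, c), [-t for t in mul(conj(d), b)])
    right = add(mul(d, a), mul(b, conj(c)))
    return left + right

# Quaternions: i*j = k but j*i = -k (non-commutative).
i = [0, 1, 0, 0]
j = [0, 0, 1, 0]
print(mul(i, j))  # [0, 0, 0, 1]  = k
print(mul(j, i))  # [0, 0, 0, -1] = -k

# Octonions: (e1*e2)*e4 differs from e1*(e2*e4) (non-associative).
e1 = [0, 1, 0, 0, 0, 0, 0, 0]
e2 = [0, 0, 1, 0, 0, 0, 0, 0]
e4 = [0, 0, 0, 0, 1, 0, 0, 0]
print(mul(mul(e1, e2), e4))  # [0, 0, 0, 0, 0, 0, 0, 1]
print(mul(e1, mul(e2, e4)))  # [0, 0, 0, 0, 0, 0, 0, -1]
```

Doubling once more gives the sedenions, where even more structure (the absence of zero divisors) is lost.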

Indeed. When I learned of quaternions, I started playing around, trying to create octonions, but I lost interest when I realized that they couldn’t be associative. I mean, what use is an operation that isn’t even associative?

Well, OK, I guess that exponentiation, subtraction, and division are all non-associative. But that’s different.

Um… Lie algebras?

Except then there’s the fact that the exceptional simple Lie groups often have nice explanations as groups of rotations in octonionic modules. E[sub]8[/sub], for instance, is very big in string theory, so physicists are having to learn more math than ever.

You don’t go to many parties, do you? :stuck_out_tongue:

Can’t trust 'em. :wink:

DSYoungEsq: If you cross that river, a great empire will fall.

Seriously, I know every kind of number must have a damn good reason for existing. I don’t understand the majority of this stuff and I don’t pretend to, so I give links and make jokes to keep the peanut gallery chuckling and edified.

Are you kidding? What with the hours in the ivory tower how can you not go to parties, if only to keep from getting bored?

Oh wait, you’re working a real job? /me points and snickers