In what way is math distinct from notation?

This thread got me thinking about what exactly constitutes mathematics, especially how important notation is to the field.

In his “Lectures on Physics”, Richard Feynman said:

I tend to agree with him, but I’m finding it hard to express why. Here’s what I posted in the other thread:

Sorry, I’m finding it hard to express myself satisfactorily on this topic, which is the point of this thread. Therefore there is no clear debate question, but I thought the topic itself was debate-worthy.

So what do you think?

I hope Indistinguishable will participate in this thread. Until he arrives, however, I would say math uses the language of logic in order to quantify and order things. Math notation is an extension of logic notation. One could say, for example, that a biconditional implication (logic) is analogous to an equation (math) — not the same, but analogous.

Here’s my response to Ruminator’s post in the other thread:

But here you’re confusing math and science. Newton’s science was indeed driven by ideas. As you say, though, Newton came up with his dot notation for calculus specifically to deal with the radical new ideas he was coming up with in physics. To me, this looks like the ideas of science were incapable of being expressed in the mathematical notation of the day, and Newton had to remedy the situation by inventing a new notation that incorporated those ideas and built on them. I suppose you could look at it either way, though.

By the way, I have to go to school now, so it will be a few hours before I can get back to this thread, sorry.

I’m amazed at how often that happens, even here on this board. I’ve seen people dig in their heels and absolutely insist that they can prove 1 + 1 = 2 through a sort of scientific experiment that basically involves counting things and then inducing a general rule from their particular observations. It is truly exasperating to try to reason with these people. It’s especially maddening because they do know (or should know) the difference between an empirical claim and an analytic claim. It’s probably no coincidence that these same people demand proof of God and other metaphysical things in a scientific context. You might find eventually, as I did, that the first invocation of that sort of craziness is a big clue that you should jet. Otherwise, you’ll find yourself chasing a greased pig which, even if you catch it, will just squeal and blow snot on you.

Good notation is very helpful when you’re doing math. (Indeed, sometimes it makes hard problems easy.) But the notation is completely independent of the ideas behind it, and these ideas are the essence of mathematics.

You can do math in any language, and most early mathematicians did math without using any symbols. But ordinary language isn’t really suited to the job, and so problems that we find extremely easy today were very hard at the time. Different languages are good at expressing different things, and mathematics benefits from a symbolic language of the kind mathematicians use today, but it’s still independent of this language.

How do you substantiate that? In my experience, the ideas come first, and then the notation follows. In fact, the chosen notation is likely to change several times over the course of writing up a result, as you work out what works and what doesn’t.

Sure, a better notation could save you space while writing, but it doesn’t change the underlying principles. All notation does is take an abstract concept and make it easier to read. Take functions, for example: f(x) is really just shorthand for a concept in set theory. It’s easier to work with for certain problems, so we use that notation. However, if we wish to understand exactly what a function is, we fall back on set-theoretic notation.
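
To make that concrete (a generic illustration, not from any particular textbook), the familiar formula notation and the set-theoretic notation describe the same object:

$$
f(x) = x^2 + 1
\qquad\text{versus}\qquad
f = \{\, (x,\ x^2 + 1) : x \in \mathbb{R} \,\}
$$

Same function either way; the second form is closer to “exactly what a function is,” while the first is the one you’d actually compute with.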

I don’t think I am. I simply used science as a narrative to frame the example. Let me zoom in on my 2 sentences related just to the “mathematics” portion and add a {BOUNDARY} to highlight what I’m trying to say.

At the {BOUNDARY} point in time, I’m saying that there is already new mathematics (calculus) without the dot notation. To me, this is not an academic distinction. Maybe Newton created the dot notation within seconds of creating calculus. Maybe he was using a different, more cumbersome internal notation for weeks or years before finally settling on dot notation. Maybe he didn’t use any notation at all and just thought of dynamically changing quantities in his head. In any case, the calculus was there as soon as he thought of it.

Newton had his “dot notation” and Leibniz had his dx/dt notation. I’m saying that calculus (as a workable concept) existed in their minds before either notation and is independent of the notation. However, calculus redefined as a “teaching topic” or a “foundational concept” for future ideas certainly can’t exist without the notation.
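
For concreteness (a generic example, not anything from Newton’s or Leibniz’s actual manuscripts), both notations name the same underlying limit:

$$
\dot{x} \;=\; \frac{dx}{dt} \;=\; \lim_{h \to 0} \frac{x(t+h) - x(t)}{h}
$$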

Richard Feynman was a genius, but maybe he was a little biased here, since he created the famous “Feynman diagram” notation for quantum physics. :slight_smile:

In any case, I think the “mathematics is notation” comment is correct in some ways, but it depends on the audience. Telling a class of high-school students that mathematics is “mostly about notation” is going to be counterproductive, because they will misinterpret the message. At this early stage, many kids are already turned off to math because the typical school curriculum stresses the mechanics of manipulating notation: “moving symbols around, cancelling like terms, carrying the tens over to the left column, etc., etc., ad nauseam.” This is what some educators might call “fake math.” Real math (as some would define it) is recognizing and imagining patterns. This idea of genuine mathematical thinking would make sense to a mature physics student, so Feynman’s suggestion to embrace notation does no damage to that audience.

As a programmer, not a mathematician, it certainly makes sense to me.

In the end, all of the code you write just becomes CPU instructions that you could have written out by hand, but ideas like functions and objects radically changed the way code is written and allowed for many advances.

Code is code, but paradigms are what change the ballgame.
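
To make that concrete, here is a toy sketch (the function names are mine, purely illustrative): the same sum of squares written as hand-rolled recursion and then with higher-order pieces. The machine does roughly the same work either way; the paradigm is the part that changes how you think about it.

```haskell
-- Toy example: sum of squares of 1..10, written two ways.

-- "Hand-rolled" style: explicit recursion, one step at a time.
sumSquaresLoop :: [Int] -> Int
sumSquaresLoop []       = 0
sumSquaresLoop (x : xs) = x * x + sumSquaresLoop xs

-- "Paradigm" style: compose existing higher-order functions.
sumSquares :: [Int] -> Int
sumSquares = sum . map (\x -> x * x)

main :: IO ()
main = print (sumSquaresLoop [1 .. 10], sumSquares [1 .. 10])  -- prints (385,385)
```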

It was my understanding that the Principia had very little calculus notation in it. I believe it was written mainly in terms of geometry.

That’s true, but because of our mental limitations, chunking concepts through notation can help us see the relationships between those concepts more clearly. The ideal of math may be independent of notation, but math as practiced is all about notation, and practically impossible without the proper notation.

I lean towards the viewpoint of Dr. Cube, in thinking that notation can affect the underlying principles very much. For instance, we have different notations for Coxeter groups. You can write a Coxeter group as an old-fashioned group presentation or as a Cayley graph, but those who work on Coxeter groups have several notations designed just for them. For example, there is the presentation graph, where each generator is a vertex and each edge represents a relation. Based on this, we can define a splitter as a set that separates the presentation graph.

So, could splitters exist without presentation graphs? In theory, one could write a definition of a splitter in the notation of a presentation or any other notation. In practice, it would take a great deal of space and be nearly incomprehensible. In reality, there were no splitters before presentation graphs, and the idea of a splitter just wouldn’t have come up without that notation.
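
For reference, here is a generic Coxeter presentation (the exponents m_{ij} are left unspecified); the presentation graph is just a picture of this data, with one vertex per generator:

$$
W \;=\; \bigl\langle\, s_1, \dots, s_n \;\bigm|\; s_i^2 = 1,\ (s_i s_j)^{m_{ij}} = 1 \text{ for } i \neq j \,\bigr\rangle
$$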

(And I apologize for choosing such an esoteric example. It was just the first one that came to mind.)

I wrote a post some time ago on my own perspective on the role of math in life. Thread here from 2004. Note: I haven’t reread it; some details of my opinion may have changed since then.

In any case, the upshot of my view is that math is just the language of certainty. Under such a view, notation is just another means of expression, and we’d expect it to morph over time as our sophistication increases (or, we hope it does!).

Abuses of notation are interesting to consider with regard to the OP’s subject.

Sure, but the ideas of functions, objects, monads, and whatever else came before the notation, not because of it.

There are certainly ideas independent of notation. But without the notation, they’re not mathematics. It’s the notation which makes the ideas math. Ultimately, all that math is is a set of symbols and rules for manipulating those symbols.

You’re a formalist?

Is good math to a large degree about good notation? Hm… I dunno. I don’t really feel like making pronouncements either way. But it seems like perhaps different participants in this thread have different ideas about what constitutes notation. So perhaps it’s helpful to the discussion to bring that out more explicitly.

For example, let’s look at the swath of variation possible in notation. It’s not just the trivial matter of selecting letters and accent marks to denote various operations, is it? Something like choosing to carry out a geometric proof with diagrammatic reasoning rather than with words or laborious manipulation of corresponding serialized formulae is, after all, in some sense a notational choice, yet perhaps a very important one; consider also diagram-chasing arguments in category theory. But it’s not always so clear-cut what counts as notation. Do ITR’s examples of representing certain groups as edge-labelled graphs count as notation, or is this some kind of pre-notational mathematical transformation? As a less obscure example in a loosely similar vein, is representing linear operations as matrices of real numbers a matter of mere notation, or is there more going on here?
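
(To pin down that last example with a made-up instance: the same linear map written pointwise and as a matrix.)

$$
T(x, y) = (2x + y,\; x)
\qquad\longleftrightarrow\qquad
\begin{pmatrix} 2 & 1 \\ 1 & 0 \end{pmatrix}
\begin{pmatrix} x \\ y \end{pmatrix}
=
\begin{pmatrix} 2x + y \\ x \end{pmatrix}
$$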

If “notation” means only “Once you already have the idea, how do you put it down on paper?”, then, by definition, notation is but an afterthought. But that perhaps is not the most apt definition of notation.

As for the OP, since it’s his hypothesis, perhaps he would care to put forth some more examples of notational breakthroughs? (I don’t think he would have spontaneously offered up Newton’s dot notation as an example of what he’s talking about)

Here’s one small example I can think of, and we can debate where concepts/ideas end and where notation begins (and to what extent there is overlap and feedback), which is, after all, the whole question: in the notation of the lambda calculus, one often writes functions anonymously: rather than first giving a definition “f(x) = x^2 + 3x + 7” and then using the name “f” later, one can simply write “\x -> x^2 + 3x + 7” to denote that same function. It’s a small thing, but this not having to cumbersomely use names to refer to every function one decides to talk about makes it much easier to manipulate functions in a higher-order manner; in a way, it reduces the psychological hurdle to passing functions as arguments to other functions, having functions return new functions, etc., while at the same time making clear which variables are bound in which way where (as an example of all this, we might give the definition Derivative(f) = \x -> Limit (\h -> [f(x + h) - f(x)]/h) 0, and then observe that Derivative (\x -> x[sup]y[/sup]) = \x -> yx[sup]y-1[/sup]). Historically, this has all been quite influential… but is the core of this innovative perspective a matter of notation or something else?
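
(As a rough programming-flavored sketch of the same idea, with the limit replaced by a small fixed step; the names here are mine and not from any particular library:)

```haskell
-- A crude numerical stand-in for Derivative(f) = \x -> Limit (\h -> [f(x + h) - f(x)]/h) 0,
-- with the limit replaced by a small fixed step h.
derivative :: (Double -> Double) -> (Double -> Double)
derivative f = \x -> (f (x + h) - f x) / h
  where
    h = 1e-6

main :: IO ()
main = print (derivative (\x -> x ** 3) 2.0)  -- roughly 12, i.e. 3 * 2^2; the cube is passed anonymously
```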

[Actually, rather than a backslash and an arrow, one generally uses a Greek letter lambda and perhaps a dot, or instead the English word “lambda” and parentheses, or an arrow with a vertical line on the left, or, long, long ago, a circumflex over a variable; most of these notational differences, I think we can agree, are of no importance whatsoever]

[I also want to say that **erislover** is exactly right to say that “abuse of notation” is where much of the interesting meat of notation leading to mathematical progress is to be found, even though my example was not of that sort (and though I think the Wikipedia page he links to isn’t very good)]

[And I also want to basically agree with Sage Rat’s “Code is code, but paradigms are what change the ballgame”: the question is how much paradigms are entangled with notations (which depends on what “notation” is taken to constitute)]

I don’t think that declaration is universally agreed upon, even among mathematicians. Or maybe it’s a semantics issue.

Imagine someone wants to measure the height of a tall tree (too tall to reach up to the top and measure directly). Imagine he has a piece of unmarked rope. While the sun is casting shadows, he could mark his own height on the rope and then mark the length of his shadow. He then compares the rope-length of his height to the rope-length of his shadow. (There are no equally spaced interval markings on the rope, so he’d have to make this comparison geometrically, maybe by being clever and folding the rope over itself to determine the ratio.) He then marks the rope-length of the tree’s shadow and from that can determine the tree’s height. Is this type of thinking considered “mathematics” even though no symbols or notation were required? Many folks would say yes.
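
For the record, here is what the rope-folder is exploiting, written in the notation he never needed (the labels are mine, purely illustrative): one proportion between similar triangles, where h is a height and s is the corresponding shadow length.

$$
\frac{h_{\text{tree}}}{s_{\text{tree}}} \;=\; \frac{h_{\text{man}}}{s_{\text{man}}}
\qquad\Longrightarrow\qquad
h_{\text{tree}} \;=\; h_{\text{man}} \cdot \frac{s_{\text{tree}}}{s_{\text{man}}}
$$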

Sure, there’s “notational” math that can solve this too: similar triangles, trigonometry, etc. But it can also be reasoned out geometrically. I think what some people are saying is that the tree-measuring method is “clever” but it is not “mathematics” until it is formalized into notation.

I disagree. I don’t believe this is the kind of threshold for calling something “mathematics” that we want to encourage, because one of the problems kids have is getting comfortable with manipulating notation instead of getting comfortable with true mathematical thinking.

To me, the tree-measurement thought process is more “honest and pure” mathematics than the performance of a robot-kid who gets an A on an algebra test because he’s memorized the tedious notation-manipulation rules for polynomials.

I recognize that as you get into ever more complicated and advanced math topics, you leave the “physical world” behind and the notation itself (instead of the tree) becomes the object of thought and manipulation. A theoretical physicist working in 13 dimensions has a “symbolic” world as both his input and output. However, I didn’t emphasize this case because the original context for this thread was an algebra teacher and her students.

[Where the issue of semantics might come into my tree example is to propose that the unmarked rope is in and of itself a type of “notation”. I suppose it’s possible to reinterpret a piece of rope that way, but that would seem a warped way of equating “notation” with “math”. In this thread, I assume that most people define “notation” as some kind of written glyph on some medium (e.g., Greek letters on papyrus, paper, a chalkboard, an Excel spreadsheet, etc.).]

To me, one of these two ideas is highly misleading, and I think it is the former notion, that one could have just written out the CPU instructions by hand. Wittgenstein, for instance, was highly skeptical of the underlying value of a 100+ page “proof” that 1 + 1 = 2. He thought that someone offering him a massive proof like that would make the proposition more questionable, not less, and I am in nearly absolute agreement about that. I can trust a calculation like 125 + 17 = 142, and if I were testing someone’s knowledge of basic arithmetic I’d at least like to see them “carry the one,” but if Russell rose from his grave to produce an epic work proving such a thing from first principles, it would be less illuminating to me.

Notation is not a shortcut; notation is how we do math. Principia Mathematica is a bedtime story.