Is good math to a large degree about good notation? Hm… I dunno. I don’t really feel like making pronouncements either way. But it seems like perhaps different participants in this thread have different ideas about what constitutes notation. So perhaps it’s helpful to the discussion to bring that out more explicitly.
For example, let’s look at the swath of variations possible in notation. It’s not just the trivial matter of selecting letters and accent marks to use to denote various operations, is it? Something like choosing to carry out a geometric proof with diagrammatic reasoning rather than words or laborious manipulation of corresponding serialized formulae is, after all, in some sense a notational choice, yet perhaps a very important one; consider also diagram-chasing arguments in category theory. But it’s not always so clear-cut what counts as notation. Do ITR’s examples of representing certain groups as edge-labelled graphs count as notation, or are they some kind of pre-notational mathematical transformation? As a less obscure example in a loosely similar vein, is representing linear transformations as matrices of real numbers a matter of mere notation, or is there more going on here?
If “notation” means only “Once you already have the idea, how do you put it down on paper?”, then, by definition, notation is but an afterthought. But that perhaps is not the most apt definition of notation.
As for the OP, since it’s his hypothesis, perhaps he would care to put forth some more examples of notational breakthroughs? (I don’t think he would have spontaneously offered up Newton’s dot notation as an example of what he’s talking about.)
Here’s one small example I can think of, and we can debate where concepts/ideas end and where notation begins (and to what extent there is overlap and feedback), which is, after all, the whole question. In the notation of the lambda calculus, one often writes functions anonymously: rather than first giving a definition “f(x) = x^2 + 3x + 7” and then using the name “f” later, one can simply write “\x -> x^2 + 3x + 7” to denote that same function. It’s a small thing, but not having to cumbersomely attach a name to every function one decides to talk about makes it much easier to manipulate functions in a higher-order manner; in a way, it reduces the psychological hurdle to passing functions as arguments to other functions, having functions return new functions, etc., while at the same time making clear which variables are bound in which way and where (as an example of all this, we might give the definition Derivative(f) = \x -> Limit (\h -> [f(x + h) - f(x)]/h) 0, and then observe that Derivative (\x -> x[sup]y[/sup]) = \x -> yx[sup]y-1[/sup]). Historically, this has all been quite influential… but is the core of this innovative perspective a matter of notation or something else?
[Actually, rather than a backslash and an arrow, one generally uses the Greek letter lambda and perhaps a dot, or instead the English word “lambda” and parentheses, or an arrow with a vertical line on the left, or, long, long ago, a circumflex over a variable; most of these notational differences, I think we can agree, are of no importance whatsoever]
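For anyone who wants to play with this concretely, the same idea transcribes almost directly into Haskell, whose anonymous-function syntax is the very backslash-and-arrow I used above. What follows is only a rough sketch, not anyone’s canonical implementation: the name `derivative` and the fixed step size are choices of mine, and the limit is merely approximated numerically. But it does show the convenience of functions being passed in, returned, and composed without ever being christened:

```haskell
-- Sketch only: `derivative` and the step size h are my own choices, and
-- the limit is approximated numerically rather than taken exactly.

-- derivative takes a function and returns a new function (an approximation
-- to its derivative); neither the argument nor the result needs a name.
derivative :: (Double -> Double) -> (Double -> Double)
derivative f = \x -> (f (x + h) - f x) / h
  where
    h = 1e-5  -- small step standing in for the limit as h goes to 0

main :: IO ()
main = do
  -- differentiate f(x) = x^2 + 3x + 7 at x = 2 without ever naming f;
  -- the exact answer is 2*2 + 3 = 7
  print (derivative (\x -> x**2 + 3*x + 7) 2)
  -- the power-rule example: derivative of \x -> x^3 at x = 2; exact answer 12
  print (derivative (\x -> x**3) 2)
  -- and since the result is itself a function, derivative composes with itself:
  -- second derivative of x^3 at x = 2; exact answer 6*2 = 12
  print (derivative (derivative (\x -> x**3)) 2)
```

The point, again, is not the numerics but that the \x -> … form lets each function be conjured up and handed around on the spot.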
[I also want to say that **erislover** is exactly right to say that “abuse of notation” is where much of the interesting meat of notation leading to mathematical progress is to be found, even though my example was not of that sort (and though I think the Wikipedia page he links to isn’t very good)]