Descriptivists: When does an "error" become "nonstandard usage" become "alternative"?

In part triggered by a thread on the pronunciation of “asterisk”, I have a question for the descriptivists in the group. I’ll use that thread as an example.

Once upon a time (possibly), everybody pronounced the word as spelled, as “asterisk.” Then one person came along and said “asterix” or “asterik.” Would a descriptivist say that person made an error, or created a nonstandard pronunciation? If it’s an error, how many people have to make the same error before it becomes a nonstandard pronunciation? How many before it becomes an alternative pronunciation?

The same question applies to usage as well, such as the use of “comprise” to mean “compose” (especially in constructions like “is comprised of”) rather than its original Latin-derived sense of “include.”

I realize there is no formal standard for English grammar, usage, or pronunciation, and I have no desire to establish the Language Police, but I do want to understand when an error stops being an error and becomes something else.

There is of course no literal, numerical, or epistemological answer to this.

The rule of thumb I’ve often stated here is that a usage is standard when it is commonly adopted by good writers, with good writers defined as the general class of working newspaper, magazine, book, and other print writers that the public reads.

This therefore accounts for ain’t and irregardless still being considered nonstandard, despite decades of occurrence. It allows new usages to move into the language with fair rapidity while keeping most faddish slang out. Practices like apostrophes in plurals are seldom tolerated in good writing, but the contraction of two-word phrases to hyphenated forms and then to single words moves apace.

The status of any individual word is therefore a matter of debate and conjecture. As the American Heritage usage panel has shown consistently for decades, this is the way it has always been and will be for any foreseeable future.

Standard pronunciation is harder to gauge this way, but pronunciation is best left to the experts. As I said in that other thread, you (plural you) don’t know how you really pronounce words until you parse them from a recording. You may think you pronounce asterisk in the “correct” manner, but you have no idea how it or 10,000 other common words are being heard by listeners. The subtleties of variation fill many boring books.

" Ain’t" is a perfectly good dialectical word, has been around since before the USA was a nation, and is accepted as such by Oxford.

I read an interesting article once about how dictionary writers decide on new words, meanings, and spellings for inclusion. (Sorry, it was a while ago, and I couldn’t remember the source for the life of me.) Pronunciation is another can of worms, but you also mention usage.

The gist, though, was that they read a lot, and they have to be able to find multiple instances across a fairly broad spectrum of media; it can’t be something you only see used within a confined group. So basically, if it’s a word/usage/spelling that’s making its way into newspapers and novels as well as blogs and windshield flyers, then it becomes “legit.”

No one is disputing that. What is being stated is that it is nonstandard in formal writing.

A true descriptivist would call that an error only if the listener could not make out what the speaker intended to convey. So even that first “asterix” would be considered an OK variant for colloquial speech.

To sum up: descriptivists do not comment on matters of style. “Bad” style =/= miscommunication.

One possible objective standard is that a native speaker of a language is incapable of error, at least in the spoken form of the language.

Well, that’s too extreme. What if a native speaker immediately recognises the error and self-corrects?

That’s not strictly true, though I know what you are trying to say and I can’t for the life of me figure out another way to say it. Native speakers do make slips of the tongue and other mistakes, and they recognize them when they occur. But those are errors in speech, not grammar. A form occurring naturally on the lips of a native speaker, understood by other members of the speech community, cannot be an error no matter how deviant from the formal language.

As a side note, I came across this in the Scarlet Pimpernel (1905): “Her coachman, too, had been indefatigable; the promise of special and rich reward had no doubt helped to keep him up, and he had literally burned the ground beneath his mistress’ coach wheels.” I had no idea the use of “literally” to mean “figuratively” went back that far. [I would have said “mistress’s,” as well, but that might be the modern editor’s work.]

This is descriptivism in a nutshell. Well said.

Of course it did. Or rather, ‘literally’ always has meant and always will mean ‘literally’, and it always has been and always will be used as a hyperbolic way to mean ‘almost but not really.’

A side question: what do you call the people who neither want to document how native speakers speak nor maintain a status quo of how sophisticated writers ought to, but ask the question, ‘How should we formulate the language to make it better?’

Kind of like language designers. Because I’ve always found it ironic that prescriptivists and descriptivists are really on about the same thing; it’s just that the former are behind the latter by 50–200 years. But what do you call someone who actually analyzes the communicative merits of inserting an apostrophe before the plural s of acronyms or other specially formed words?

Baroness Orczy was not a stickler for accuracy or realism. Isn’t it possible she meant literally? (This, I feel, illustrates a problem with taking descriptivism too far. Accept literally as a synonym for figuratively, and it winds up communicating nothing.)

Agreed. Descriptivism is most useful for the spoken language. For written literature, prescriptivism is more useful. That’s why we have an educational system and style guides and the whole ball of wax. Of course, there’s dialect literature (and dialectal moments in standard-language literature) just to keep it interesting, not to mention language change.

That happens a lot. My favorite (or most hated) example is the muddled meaning of will/may/might. A lot of times people will say ‘might’ when they should really say ‘will’ just to cover their ass or out of habit. It confuses everything. A real blight on our language (especially in technical contexts).

But it’s not a matter of grammar or word meaning, but of usage. Just like using ‘literally’ to mean ‘almost like’, it’s well within poetic license.

I have never understood there to be any difference between shall/will or between may/might, so I have never been confused.

(will) > (probably will) > (may) > (might) > (might possibly)

(might possibly) > (thinking about it) > (we’ll see)

:)

  1. This distinction between “may” and “might” makes no sense to me.

    a. Etymologically, it’s just a difference in grammatical tense.
    b. There’s nothing about the words or their uses that suggests to me that it should be may>might instead of the other way around.
    c. Your schema offers me way too much granularity to be useful in common speech. You might as well use numbers if you want to get this fine.

  2. “Might possibly” just sounds like a redundancy.

  3. If you must distinguish between “may” and “might,” I would prefer this one:

    a. “May” refers solely to permission: I may go to the movies = I am permitted to go to the movies
    b. “Might” refers solely to possibility: I might go to the movies = There is a possibility that I will go to the movies

Maybe I’ve been mistaken, but I’ve always thought “this may do that” expressed a realistic probability of, say, 5–20%, while “this might do that” expressed a more remote possibility of, e.g., 1–5%. Not sure why, but that was always my impression of what the words meant, “etymological” arguments aside.

I’ve been advocating this all along, and our discourse would be infinitely better for it.