Logic vs language.

I don’t find this so trivial, however. In fact, I would say, because that is what it means, it has everything to do with it.

Lib and eris: You’re right, the two examples I gave are examples of semantic referents (even if they might be ambiguous, or contain non-verbal components, or whatever). I spose that once you’ve got a symbolic referent (or set of referents) for a concept that the semantics have already taken care of themselves. I was thinking that concepts that cannot be adequately described could be symbolic without being semantic (e.g. all of Lovecraft’s “horrors beyond human imagination”, or God (or more often God’s Plan) being described as “ineffable”), but really in these cases “cannot be adequately described” serves as the semantic referent.

Exactly.

And by the way, the best semantic reference I have heard for God is “necessary existence”.

A clarification:

I am not stating that language is the only means or the superior means to develop an epistemology. But that its acquisition and development, through childhood and societally, is a superior model of real world knowledge acquisition.

Logical systems are set up following from a particular set of axioms. True, the axioms may themselves have been formed inductively, but for most logical systems induction is then explicitly kept out of the equation. A logical system cannot be set up in a vacuum. It cannot and does not bootstrap itself. It presumes a great amount of understanding about the world to begin with. (Eris has spent many a thread focused on this very aspect of epistemology.) Language, as acquired by infants into adulthood, and as developed by societies over eons, on the other hand does bootstrap itself. Its terms are not defined ahead of the game but defined by the process of playing the game.

Logic may be a particular language, a special case, but it is different from languages that evolve over time and that we learn naturally as part of development. Likewise I exclude learning a new language as an adult from my conceptualization of language as a model of human real world epistemology.

Is it just me or is this a tautology?

But natural language has what we sometimes consider a flaw: ambiguity. This can be based on construction, or it can be based on an incomplete context, or on imprecise use of a word (or all of them!). In logic we try to remove that. This makes logical conclusions much clearer than they would otherwise be. Best not to think of logic as limited, but rather to think of logic as strengthening language in a place where it can sometimes be weak.

Logic has evolved over time, as well, though. It is not quite so static. Not only are there many different logics, there are probably as many squared interpretations of them.

Eris wrote:

That’s for certain. First order logic has a bazillion cousins. And not all systems are deductive either. Inductive systems, using techniques like inverse resolution, relative least general generalisations, and inverse implication, are quite popular now in computer programming. Even abductive systems have emerged.

But among deductive systems, the modal logics in particular are multiplying like rabbits. Here are just some, from Stanford:

[ul]
[li]The serial D system, where a necessary A implies a possible A[/li][li]The reflexive M system, where a necessary A implies an actual A[/li][li]The transitive 4 system, where a necessary A implies a necessarily necessary A[/li][li]The symmetric B system, where an actual A implies a necessarily possible A[/li][li]The Euclidean 5 system, where a possible A implies a necessarily possible A[/li][li]The unique CD system, where a possible A implies a necessary A[/li][li]The shift-reflexive Necessary M system, where a necessary A implying an actual A is a necessity[/li][li]The dense C4 system, where a necessarily necessary A implies a necessary A[/li][li]The convergent C system, where a possibly necessary A implies a necessarily possible A[/li][/ul]

These systems overlap across manifold dimensions. S5, for example, is the same as M5, M45, MB5, M4B5, M4B, D4B, D45B, and DB5. And new logics are introduced practically every day. It’s a hell of a zeitgeist for logicians.

It seems that every time we find ambiguity in logic, we make new ones to re-affirm the possible interpretations consistently.

erislover: Do you believe that the symbol “2” is inherently related to the number? Do you think that people all over the world, in all times and places, would recognize it as referring to this(**) many things? Do you think that the meaning of “2” is built into the very structure of space and time itself?

Presumably, no.

Words like “he”, “she”, and “it” are particularly flexible: they can be used to refer to many different things depending on context. However, the things they point to share certain things in common; the underlying concept would be expressed roughly in English as “thing with a particular gender that I am referencing”.

When students are taught about the repulsive and attractive forces between atoms, they’re often told that the atoms are like springs. This metaphor is just fine, but the problem is that springs act the way they do because of the forces between atoms. Are the forces between atoms like the forces between atoms? Are springs like springs? Without the sensation/structure/concept behind the words, the metaphor is meaningless.

But I don’t think these are the same questions.

Let A represent “The”
Let B represent “weather”
Let C represent “is”
Let D represent “[however the weather is where you are, fine, cold, warm, whatever]”

Now, say “A B C D” and mean “The weather is fine” [or whatever]. Personally? I can’t do it. I can’t do it because the symbols I use to mean things are entirely connected to what I mean by their use (and I don’t mean this tautologically).
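The substitution itself, of course, is mechanically trivial; a few lines of Python can do it (the mapping and the sentence are just the illustration above, with “fine” standing in for the weather placeholder). Which rather underlines the point: whatever resists the substitution is the meaning, not the mapping.

```python
# An arbitrary convention: which word each symbol stands for.
symbols = {"A": "The", "B": "weather", "C": "is", "D": "fine"}

# "Saying" A B C D is, mechanically, just string substitution...
sentence = " ".join(symbols[s] for s in ["A", "B", "C", "D"])
print(sentence)  # The weather is fine

# ...but nothing in the dictionary captures what a speaker *means*
# by uttering it. The mapping is pure convention; the meaning is not.
```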

This doesn’t mean that the exact way they are represented is therefore not arbitrary. ‘Twoness’ needn’t be represented by any symbol. But if we are to use the meaning, it stands in need of a symbol that we all agree on. This makes the symbol quite important, wouldn’t you say?

Sure. That’s what meaning is all about. But without the symbol, what do you mean?

[a storm of mingled harmonies]

“A step into darkness.”

[fades to silence]

Ah, but you see, what you consider the flaw of ambiguity, I consider critical contextual modification for modelling knowledge acquisition.

You see, you all keep trying to defend logical systems as superior epistemological systems, which is a debatable point in its own right, but is not relevant to the position put forth.

How do we as individuals and as societies actually form knowledge?

Does an individual, or a society, begin with a set of rules (whether it is the shift-reflexive Necessary M system, where a necessary A implying an actual A is a necessity, or the dense C4 system, where a necessarily necessary A implies a necessary A, or the convergent C system, where a possibly necessary A implies a necessarily possible A)? Or do individuals form the rules as they form the knowledge contemporaneously?

Let us use the acquisition of scientific knowledge as a case study. Understanding context modulation is key. Concepts morph with usage and with new experiences. One metaphor suggests another. The questions create the tools and the available tools suggest the questions. The scientific method was not established first, with scientific knowledge then emerging from it. The two developed out of each other; the system for forming the knowledge was created simultaneously with the process of acquiring the knowledge, and both the system and its contents matured together, creating each other as they went.

Language acquisition and development models such a process; to the best of my (limited) knowledge no logic system does.

DSeid wrote:

Language is fuzzy because the brain is fuzzy.

Memories, for example, are not stored like video-taped documentation, but rather like flashes of recognition here and there, some accurate and some altered. Perceptions are not etched in stone; they are malleable. Comprehension is self-referential and subjective, not absolute and objective.

Attempts to fuzzify logic have highlighted an interesting property of language, I think. From a Fuzzy Logic FAQ: “What’s crucial to realize is that fuzzy logic is a logic OF fuzziness, not a logic which is ITSELF fuzzy.”
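That FAQ line can be made concrete with Zadeh's original fuzzy connectives: the operators themselves are perfectly crisp functions; only the truth values they act on are graded. A minimal sketch (the membership degrees here are made up for illustration):

```python
# Zadeh's fuzzy connectives: crisp operators over graded truth values in [0, 1].
def fuzzy_and(a, b):
    return min(a, b)

def fuzzy_or(a, b):
    return max(a, b)

def fuzzy_not(a):
    return 1.0 - a

# Invented membership degrees: how "tall" and how "heavy" someone is.
tall, heavy = 0.75, 0.5

print(fuzzy_and(tall, heavy))  # 0.5  -- tall AND heavy
print(fuzzy_or(tall, heavy))   # 0.75 -- tall OR heavy
print(fuzzy_not(tall))         # 0.25 -- NOT tall
```

Note that nothing about `min`, `max`, or `1 - a` is itself vague; the fuzziness lives entirely in the inputs, which is exactly the FAQ's point.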

But modelling the acquisition of knowledge doesn’t give you knowledge, it gives you a model of acquisition and begs the question of what knowledge is in the first place. Epistemology needs to say quite a few things:

  1. What is knowledge?
  2. Is knowledge, as defined above, possible?
  3. How do we come to know something if so? If it isn’t possible, why does it seem like it is?

Simply looking at language will tell us a lot. It will tell us, for example, how we are inclined to use the words “know,” “believe,” “suspect,” and other knowledge-words. But why would you suspect that this will give us knowledge (apart from, perhaps, knowledge of how the words work)? Now, there is much we can learn from a model, provided we have already answered those three questions above, and developed some ideas of what models are good for.

So open up your own dang thread! :wink:
I am particularly interested in HOW we actually get knowledge. And I, being out of the sciences, see models and case studies as valuable tools for exploring how things really work (as opposed to how they theoretically should work). So just as the development of the visual system models the development of other sensory-cortical systems that are less easily studied directly, so does the more tractable study of how languages develop model the real world formation of knowledge in general.

Lib,

Exactly right.

And because the human brain is fuzzy, real world human knowledge is fuzzy. Fuzzy logic is one attempt to replicate that, but it is, IMHO, an inadequate one.

All I have so far is your assurance that this is the case. You’ll have to pardon me if I remain unconvinced.

As am I. I am also particularly interested in conversational language. But I am not sure that this interest automatically guarantees that this is the way we should approach it.

A model says how things theoretically work. Even if your model is natural language. A big part of it being a model is treating it as representative, in a partially abstracted case, of the thing we desire to understand. It is a matter of interpretation in any case. To ensure consistency, it is most common to remove as much ambiguity as possible from a model. This way it does what it is supposed to all the time. To achieve this we end up with things like logic. But we also end up with things like jargon. Philosophers, engineers, chemists, mathematicians, and so on all have terms in their natural language which, in the appropriate context, lose as much ambiguity as possible.

That’s nice. Do you know this or are you asserting it?

Some of the questions I have are:

  1. Why is natural language a model (at all) for epistemology?
  2. What does natural language say knowledge is?
  3. When word use shifts over time, are we assured that this shift is more suited to knowledge acquisition? Or can we in fact say anything at all about it?
  4. What makes you think anyone uses logic as a model for knowledge acquisition, rather than using it (among other things) as a method of deciding, in some cases, what is and is not knowledge?

Sorry, 2 should read, “What does ‘natural-language-as-model’ say knowledge is?”

Well, I made the proposition, so it is my job to defend it.

Perhaps the clearest way to express it would be that I put forth the case study of language acquisition as a model case for knowledge acquisition in real world contexts in general.

So to take on your questions:

  1. Why is natural language a model (at all) for epistemology?

It is an example of natural knowledge acquisition that can be studied at both evolutionary (i.e., societal over eons) and ontogenic (developing within an individual over the course of that individual’s development from infancy into adulthood) levels. Language development has been well studied already at both these levels and therefore presents a large database from which to abstract features of natural human knowledge acquisition in general. This is therefore a data-based approach to epistemology rather than theorizing alone.

  2. What does “natural-language-as-a-model” say knowledge is?

Knowledge, in the sense of natural language, is the simultaneous formation of a set of symbols that have useful correlation with some shared external reality, or at least some predictably recurring perception of an external reality, and of the rules/tools by which and in which those symbols function.

  3. When word use shifts over time, are we assured that this shift is more suited to knowledge acquisition? Or can we in fact say anything at all about it?

Hard to answer this without resorting to tautology. Best to answer: No. We are not assured that word and language rule shifts will best predict new recurring events. It is an attempt by the system to best predict, to adapt to, changing realities, which change partly as a result of the system itself. It is trying out new variants and determining what works best. Just like in evolution, many attempts will fail, but those that persist are those that best met the needs of predicting those recurring features (of the options available at the time).

  4. What makes you think anyone uses logic as a model for knowledge acquisition, rather than using it (among other things) as a method of deciding, in some cases, what is and is not knowledge?

Oh, just the tone of the various meaningless threads. If not, then fine. But my personal interest in epistemology is not so much in what various logic systems define as knowledge, but in how the human mind and society as a whole actually does it. And how best to model that was the question asked. If you care to propose a superior model for real world knowledge acquisition, then please do so and tell why.

I sort of agree, with a thousand and one caveats. :slight_smile:

And I have no qualm with data-derived theories. The sticky part is, without a small epistemology to begin with, we have no way to determine the validity of the data. Do you see what sort of assumptions creep up on us here? Or that we need a way to assess knowledge before we can get down to the nitty gritty?

I agree. But what disturbs me most in considering natural-language-as-model, is that we often have concepts and words that we use quite easily that we only later find out were used incorrectly, and in fact it wasn’t the language that told us this, but data interpreted in a theory.

I do agree that investigating how we learn language has applications far outside just learning language. It could show us how to learn all sorts of things, or at least how we do learn them. But the determination of what knowledge is in the first place, I am not confident an analysis of language-acquisition will help us get there.

If the question is: how do we learn? then I agree that language offers us a great place to start. But I don’t think anyone offers up logic as a model for how we learn.

Perhaps that is my only point.

Okay. So from your perspective, what is the difference between “learning” and “acquiring knowledge”?

Do you accept my proffered definition of knowledge in the context of natural language and, if so, do you see it as having applicability to other forms of knowledge?

I’d like to emphasize that the largest reason for thinking of this as a model is the fact that the system and that which the system contains are created together and co-evolve. It reduces the significance of your so-called God Postulate.