Do some languages process information in a more sophisticated manner than others?

I have heard that English has more words for technology terms, and that other languages just borrow our words rather than coining their own

His point here (and it’s really a very minor point in the book as a whole) is that in Asian languages numbers consist of fewer syllables, more logically arranged. Fewer syllables means that long lists of numbers are more easily memorized. (It’s been pretty well established that it’s easier to remember lists of words with few syllables than lists of words with several syllables.) “More logically arranged” means that they typically don’t switch to a different naming scheme after ten: instead of “twelve” and “thirteen” you get “ten-two” and “ten-three”, and instead of “forty” you get “four tens”.

The result of all this (he says) is that it’s easier for children who speak these languages to learn to count at an earlier age, and to develop an innate number sense earlier as well.
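To make the “more logically arranged” part concrete, here’s a quick sketch contrasting English’s irregular teens and decade words with a fully regular “ten-two” scheme. (The function names and the hyphenation are my own, not anything from the book.)

```python
# English-style names versus a regular place-value naming scheme.
ENGLISH = {
    1: "one", 2: "two", 3: "three", 4: "four", 5: "five",
    6: "six", 7: "seven", 8: "eight", 9: "nine", 10: "ten",
    11: "eleven", 12: "twelve", 13: "thirteen", 14: "fourteen",
    15: "fifteen", 16: "sixteen", 17: "seventeen", 18: "eighteen",
    19: "nineteen", 20: "twenty", 30: "thirty", 40: "forty",
    50: "fifty", 60: "sixty", 70: "seventy", 80: "eighty", 90: "ninety",
}

def english_name(n):
    """Irregular: special words for 11-19 and for each multiple of ten."""
    if n in ENGLISH:
        return ENGLISH[n]
    tens, ones = divmod(n, 10)
    return f"{ENGLISH[tens * 10]}-{ENGLISH[ones]}"

def regular_name(n):
    """Regular: every name past ten is built from the ten digit words."""
    tens, ones = divmod(n, 10)
    parts = []
    if tens > 1:
        parts.append(ENGLISH[tens])   # "four tens", not "forty"
    if tens >= 1:
        parts.append("ten")
    if ones:
        parts.append(ENGLISH[ones])   # "ten-two", not "twelve"
    return "-".join(parts)

for n in (12, 13, 24, 40):
    print(n, english_name(n), "/", regular_name(n))
# 12 twelve / ten-two
# 13 thirteen / ten-three
# 24 twenty-four / two-ten-four
# 40 forty / four-ten
```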

I don’t speak any Asian languages myself, so I can’t say how true any of this is.

Ed

Bah. I have difficulty taking “ability to memorize large quantities of numbers” as particularly useful to mathematics (even in the sense of standardized tests), but if it were a task we cared about (because, say, we wanted to use gross rote memorization as an educational tool, rather than some other technique), and if “logical/consistent/systematic” representation were key, surely any English speaker could just think of numbers as strings of digits, and not worry about saying “twelve” instead of “one-two” (or whatever).

It seems to me to be on the level of arguing “English speakers are disadvantaged because they use the special words ‘half’ and ‘second’ instead of ‘twoth’”. I am extremely skeptical that it is of any significance.

I seem to recall a Scientific American article of several years ago that maintained that Japanese language structure results in differences in thinking processes.

I’m not putting a value on it like “more sophisticated manner”, just noting a difference in processing.

Unfortunately I donated my SA library to the school library when I retired.

Anyone able to help find the article?

I just got through reading a book called “Mathsemantics” which was fun, if a bit witness-y, and sometimes factually incorrect. The author made an interesting case that various facets of the English language get in the way of mathematical ability, and that other grammars might do worse or better. It wasn’t a remarkably sophisticated argument, but I thought it was worth considering. I’ve known plenty of people who were “good at math” but had a hard time with actual math, in the sense of being able to properly abstract away unnecessary elements in order to solve basic word problems. The whole thrust of the book was developing that ability, and he had some amazing empirical evidence on his side.

This has little bearing on the OP, though, as the problem wasn’t a lack of information, but an emphasis on, er, intrinsicness in language.

In Chinese, the number 22 is written

(character for 2) (character for 10) (character for 2): 二十二

In English, it is written

(character for 2) (character for 2): 22

The former has obvious advantages for conceptualizing the decimal system. Once you understand the decimal system thoroughly, though, I’d think the English system is advantageous. In other words, neither is inherently better, but each has a distinct advantage over the other.
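Here’s a rough sketch of that place-value spelling, extended to numbers under 10,000. It’s simplified: real usage writes 12 as 十二 rather than 一十二, and inserts 零 for skipped places (202 is 二百零二), both of which the sketch ignores.

```python
# Simplified Chinese place-value spelling for 0-9999.
DIGITS = "零一二三四五六七八九"      # characters for 0-9
UNITS = ["", "十", "百", "千"]      # ones, tens, hundreds, thousands

def to_chinese(n):
    if n == 0:
        return DIGITS[0]
    out = []
    for power in (3, 2, 1, 0):
        d, n = divmod(n, 10 ** power)
        if d:
            out.append(DIGITS[d] + UNITS[power])
    return "".join(out)

for n in (2, 22, 222, 2222):
    print(n, to_chinese(n))
# 2 二
# 22 二十二
# 222 二百二十二
# 2222 二千二百二十二
```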

It looks to me as if not a single one of the authors cited as writing that some languages have advantages are actual linguistics professors. I don’t think that’s in the least bit a coincidence.

Yes, some languages are more sophisticated. German has a whole system of nesting concepts that goes beyond English’s clauses and parentheses. Its penchant for making words up on the spot is likewise commendable. Languages like Latin and Russian forgo fixed word order and use a sophisticated system of suffixes to label parts of speech. This allows greater flexibility of expression and more complex sentences…

Yet it’s a whole different argument whether that amounts to anything. After all, the human mind, and its imbecility at understanding the most basic sentences, is universal. (Btw, are you still following me?) In fact, a great argument could be made that a simple language like English forces the writer to sharpen his argument and produce more effective communication. (Not that I’ve succeeded in keeping my post simple and concise.)

But if the advanced syntax found in German and Russian were used only rarely, only when most appropriate, and if the temptation to abuse it were resisted, then maybe they’d have an inarguable claim to being more effective languages. (In fact, if this whole post had used less-common vocabulary more sparingly, it would have been more effective too. Big words, like fancy grammar, are potentially more expressive, but detrimental when used in excess. (Goddammit))

The same goes for programming languages, too.

Out of curiosity, how would a number with more digits be written?

Indeed, the meme is that They’re All The Same.

I’m surprised this viewpoint hasn’t been voiced more in this thread, but much of it is politically-correct wishful thinking.

Much of it. Not all. Certainly, the evolution of languages makes them gravitate toward the same level of complexity, given natural language evolution. Yet the evolution of languages is often not natural. Snobs come into the picture quite often! Such was the case with what we call Latin, which wasn’t actually used by anyone except snobs. (Normal people used “Vulgar Latin,” which was rather simpler.) Chinese too has been the victim of snobbery. Its thousands upon thousands of characters are testament to that (although the vast majority aren’t commonly used). Another interesting facet of Chinese is its monosyllabism. Its words used to be polysyllabic. Then some real cool guys came along and expressed themselves in weak puffs of intonation, to the point where the vocabulary contracted into a sea of homophones. A thousand words for an entire language. But with time polysyllabism has reentered the picture, and most new words are polysyllabic.

The one change that would make English and every other language process information in a more sophisticated manner is a deep, systematic integration of the concept of degree. (Like “good” vs “plus good” vs “double-plus good” in 1984, but less wordy.)

Think how often debate devolves into “is” or “isn’t” when people obviously should be talking about “is a lot” or “is a little.” This fundamental, universal flaw in our discourse is down to our damn language! Sure, we have synonyms that try to convey this information, but they do a shitty job. Even “is a lot” vs. “is a little” is still binary, and hardly expresses shades of gray!

In the future, we may adopt the use of ~10 suffixes to mark an adjective’s intensity. This advance will be profound.

“is a lot” vs. “is a little” is too discrete, but “double-plus good” vs. “plus good” is suitably continuous?

Nothing in the language is straitjacketing people away from debating degree, if they decide to do so.

It’s as much PC to report that the consensus of all linguistic professionals is that no such thing as a sophisticated language exists as it is Marxist to report that physicists think that physics works the same way everywhere in the universe.

And certainly no professional would say anything like Latin and Russian have “a sophisticated system of suffixes to label parts of speech.” Dozens of languages have far more complex systems. Complexity has nothing to do with sophistication. Snobbery works in both directions.

Can somebody email this question to a professor of linguistics so that the professor or a colleague could show us what the considerations are?

Well, it’s no great secret that some languages are generally more concise than others, so there’s at least a difference in information content, though I’m not sure whether that falls under the label of ‘sophistication’ the OP posits. A simple argument, though, would be that the ability to handle the same amount of information within a smaller space is inherently more efficient.

A useful tool to quantify information content in a language is Shannon’s concept of entropy: in physics, entropy measures the disorder, or randomness, of a given system – a solid block of ice, consisting (largely) of molecules sitting rather neatly at their positions in a crystal lattice, has a lower entropy than that same amount of water turned into steam, where the molecules just kinda zip around randomly and recklessly. In a similar sense, a string of random letters has a higher entropy than a nice and orderly sentence – as a measure of this, you can use the predictability of each letter. In a random string, each subsequent letter is pretty much unpredictable, while in any word, some letters are more likely than others: if you see a ‘q’, you’ll expect the next letter, with good certainty, to be a ‘u’.

This predictability is codified in the probability distribution of the letters, which can in principle range from dead certainty – in a string consisting of nothing but ‘a’s, the next letter will always be an ‘a’ – to complete ignorance, as is the case when all letters are equiprobable – which are the minimal and maximal cases for the entropy, respectively.
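In case it helps, here’s a minimal sketch of that letter-frequency entropy, H = -sum(p * log2(p)). This is the zeroth-order version (single letters, no context), so it only captures part of the predictability story; Shannon’s actual estimates for English used longer contexts.

```python
import math
import random
import string
from collections import Counter

def entropy_per_char(text):
    """Shannon entropy in bits per character: H = -sum(p * log2(p))."""
    counts = Counter(text)
    total = len(text)
    return -sum(c / total * math.log2(c / total) for c in counts.values())

print(entropy_per_char("a" * 1000))      # 0.0 -- dead certainty
print(entropy_per_char("if you see a q you expect a u next"))
random_junk = "".join(random.choices(string.ascii_lowercase, k=10000))
print(entropy_per_char(random_junk))     # ~4.7, close to log2(26)
```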

However, this relates to information content in an at first rather counterintuitive way: the higher the entropy, the higher the information content. That means, in effect, that a string of random letters has a higher information content than an ordered sentence of the same length, even though only the latter can convey any meaning. That can’t be right, can it?

Well, let’s look at that string of all ‘a’s again: none of these ‘a’s can convey any new information, since it’s exactly determined – the next letter is always an ‘a’, no surprise there. The whole thing could be written as ‘na’, where n is the number of ‘a’s the string consists of. It is therefore highly compressible; in contrast, the random string can’t be compressed at all, since there are no predictions you can make about which letter goes where. And that notion of compressibility is again somewhat more intuitively accessible: the smaller we can make a message without losing information, the less its information content must have been in the first place.

And that’s how one can compare information content across languages: compress a text of fixed length maximally, and see how long the result ends up being. This gives you a measure of the entropy, and thus the information content, of the language; if you want to, you can then derive an average ‘bits per word’ figure, and use that to compare languages with respect to their conciseness.
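A toy version of that test, using zlib as the (admittedly far-from-maximal) compressor. The sample strings here are stand-ins; a real comparison would compress parallel translations of the same passage in different languages.

```python
import random
import string
import zlib

def bits_per_char(text):
    """Compressed size as a rough upper bound on entropy per character."""
    raw = text.encode("utf-8")
    return 8 * len(zlib.compress(raw, 9)) / len(text)

ordered = "the quick brown fox jumps over the lazy dog " * 200
random.seed(0)
noise = "".join(random.choices(string.ascii_lowercase + " ", k=len(ordered)))

print(bits_per_char(ordered))  # small: repetitive text compresses well
print(bits_per_char(noise))    # ~4.8 or so: random text barely compresses
```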

So, well, that was perhaps a bit long and rambling toward no really good end other than saying ‘some languages are more concise than others’, and I’ve probably said a good few things most people already knew, but still, it’s an example of languages dealing with information in different ways.