There appears to be an inevitable tradeoff between information and difficulty: symbols that encapsulate a lot of information are difficult to construct, or, if they are simple to make, there must be a large variety of them, making them difficult to learn instead.
Richard Turner is a blind card mechanic. A card mechanic has the ability to manipulate playing cards in front of observers without detection. They can cheat at cards; in Turner’s case, no one will play cards with him because he is so good at it. He demonstrates the Daredevil principle: removing the sense of sight results in enhanced physical abilities to compensate. Unlike DD he doesn’t practice parkour at an advanced level; he can just make any playing card appear on top of a deck or in someone’s hand at any time. In his one-man show Dealt he demonstrates the ability to deal any hand of cards on demand, no matter how many times a deck is shuffled, whether by himself or by any other person. He can somehow read cards with his fingers and keep track of their order in a deck in remarkable ways. As far as I can tell, no one on earth can explain or duplicate his ability, or catch his incredible card handling.
Well… before I retired, one of the things I would routinely do was administer (not perform!) translations of medical product information in all official EU languages in what is termed “the centralized procedure” - essentially a system of Pan-EU licences for medicines. The way the process worked (and I assume still works) is that the original approved text was English, which then had to be translated into every official language. Here’s an example I just chose at random: Talvey.
A rough measure of how long the text is in each language is, I guess, how many KB the PDF of the product information is.
Shortest: English (EN) (317.48 KB - PDF)
Longest: Malti (MT) (394.69 KB - PDF)
And BTW
español (ES) (327.28 KB - PDF) - one of the shorter ones, though obviously I have no idea how good a measure of information per syllable this is.
Scroll down to the big header Product information and click on the link Other languages (24) for the full set. None are pictographic (sorry, I’m not a linguist) but Bulgarian and Greek use different alphabets. Offhand I don’t know of any publicly available easy comparison like this which includes Japanese etc etc.
No doubt you could perform word counts if you want a more sophisticated measure.
Interesting. I was just hazarding a guess from comparing text samples I happen to have seen (instruction booklets, medical labels etc).
I guessed the variance to be a factor of 2 at most, but actually it seems to be quite a bit less.
It does appear that all common languages are fairly close to being similarly efficient in this respect?
Well, this sample is (necessarily) restricted to official languages of the EU. Within that group there is some variation (linguistically, I mean), though, and they’re not all closely related. Hungarian, Finnish, Estonian and Maltese are not Indo-European, Maltese being the real outlier. I did a quick and very unscientific poke around a few other products, and Maltese does seem to be consistently one of the longer documents. But given the sample of languages, yes, I was always struck by how similar all the different language versions were in length.
I would say that’s not a great metric, because English text is almost exclusively ASCII, which uses one byte per character, while languages using non-ASCII characters will need at least 2, possibly 3 or 4, bytes per character (assuming UTF-8 encoding, which is pretty much universal now). A file containing a piece of text in Greek, for example, will be about twice as large as a file containing the same number of characters of English text. PDFs add another layer of complexity because they are compressed, so the size of the file does not correspond closely to the number of characters in it.
I think the file sizes are so close because of the PDF compression. The size of a text file after running through a perfect compression algorithm would depend on the information content of the text rather than the encoding. (However I don’t know what compression algorithm PDF uses.) If you extracted the text into .TXT files or something like that, the Greek would be much longer than the English.
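For what it’s worth, here’s a quick Python sketch of that idea, using zlib’s deflate (which is also what PDF’s common FlateDecode stream filter uses). The sample strings are my own, repeated to give the compressor some redundancy to exploit, as real prose has:

```python
import zlib

# My own sample strings (not from the actual documents), repeated so the
# compressor has redundancy to work with.
greek = "Υπάρχουν πολλές εκδοχές των αποσπασμάτων " * 20
english = "There are many versions of the passages " * 20

raw_greek = greek.encode("utf-8")      # ~2 bytes per Greek letter
raw_english = english.encode("utf-8")  # 1 byte per ASCII character

comp_greek = zlib.compress(raw_greek, 9)
comp_english = zlib.compress(raw_english, 9)

print(len(raw_greek), len(raw_english))    # raw UTF-8: Greek is nearly twice the size
print(len(comp_greek), len(comp_english))  # compressed: the gap narrows considerably
```

The redundant high bytes that UTF-8 adds to Greek text compress away easily, so the compressed sizes end up reflecting the information content more than the encoding - which would help explain why the PDF sizes are so close.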
For example, here’s a piece of mostly Greek text:
Υπάρχουν πολλές εκδοχές των αποσπασμάτων του διαθέσιμες, αλλά η πλειοψηφία τους έχει δεχθεί κάποιας μορφής αλλοιώσεις, με ενσωματωμένους αστεεισμούς, ή τυχαίες λέξεις που δεν γίνονται καν πιστευτές. Εάν πρόκειται να χρησιμοποιήσετε ένα κομμάτι του, πρέπει να είστε βέβαιοι πως δεν βρίσκεται κάτι προσβλητικό κρυμμένο μέσα στο κείμενο. Όλες οι γεννήτριες στο διαδίκτυο τείνουν να επαναλαμβάνουν προκαθορισμένα κομμάτια του κατά απαίτηση, καθιστώνας την παρούσα γεννήτρια την πρώτη πραγματική γεννήτρια στο διαδίκτυο. Χρησιμοποιεί ένα λεξικό με πάνω από 200 λατινικές λέξεις, συνδυασμένες με ένα εύχρηστο μοντέλο σύνταξης προτάσεων, ώστε να παράγει που δείχνει λογικό. Από εκεί και πέρα, το παραμένει πάντα ανοιχτό σε επαναλήψεις, ενσωμάτωση χιούμορ, μη κατανοητές λέξεις κλπ.
It contains 774 characters. Stored as a text file, it is 1424 bytes. In contrast, a piece of English (ASCII) text that is 774 characters long would be 774 bytes.
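You can see the character-vs-byte distinction directly in Python. The strings below are my own short samples (not the text above), chosen to have the same character count:

```python
# Characters (code points) vs UTF-8 bytes: Greek letters encode as 2 bytes
# each in UTF-8, while ASCII characters encode as 1 byte each.
greek = "Υπάρχουν πολλές εκδοχές"    # 23 characters
english = "There are many versions"  # also 23 characters

print(len(greek))                    # 23 characters
print(len(greek.encode("utf-8")))    # 44 bytes (21 Greek letters x 2, plus 2 spaces)
print(len(english))                  # 23 characters
print(len(english.encode("utf-8")))  # 23 bytes: ASCII is 1 byte per character
```

Same character count, nearly double the bytes - which matches the roughly 2:1 ratio in the 774-character / 1424-byte example above.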
I would say I know a lot about UTF-8 encoding but only a little about PDF files. As far as I can see, Adobe says you cannot directly count the characters in a PDF file. They recommend converting the PDF to an MS Word file and using Word’s “word count” feature, which is a pretty convoluted way of doing it IMHO.
“Natural short sleepers” they are called, or just “short sleepers”. They have a genetic mutation that makes them need less sleep; they aren’t sleep deprived and suffer none of the negative effects of less sleep. There are a number of other positive effects as well.
Individuals with this trait are known for the life-long ability to sleep for less time than average people, usually 4 to 6 hours (versus the average sleep time of 8 hours) each night, while waking up feeling relatively well rested. They also show a notable absence of any of the consequences of sleep deprivation, which an average person could not avoid on the sleep time (and sleep frequency) common for people with FNSS.[10][11][12][13][14]
Another common trait among people with familial natural short sleep is an increased ability to recall memories.[15] Other common traits include an outgoing personality, high productiveness, lower body mass index than average (possibly due to faster metabolism), higher resilience and heightened pain tolerance.[15][16][17][12][18][19] All of these traits are slightly more pronounced in people with natural short sleep than in people with natural normal sleep, essentially making them slightly more efficient than average people.
Since it’s genetic, if the history of human reproduction had been a bit different it might well have become the norm, and we’d all be just that much better off.