Computer Languages = Communication or Instructions?

Last night I was visiting some friends and got into a very interesting discussion with them. Somehow we got onto the subject of language and language development. My friend (who is into computers and programming and therefore a nerd of a much higher order than myself) made the case that computer languages are equivalent (in the communication sense) to other languages such as English, French, Spanish, etc. He said that computer languages are functionally equivalent to English and other languages because they fulfill the same task of communicating an idea.

I disagreed, saying that computer languages are really just an elaborate set of instructions, and nothing more. You couldn’t translate the Bible into C++ for instance, nor could you use it to express an abstract idea.

But he countered by saying that you could use a computer programming language to construct a program that would communicate an abstract idea, without necessarily using any words. So communication would take place, although indirectly.

I’ve decided to take this question to the SDMB (which he had never heard of, can you imagine such a deprived life?). I know there are computer programmers out there reading this, and I hope that there are a few linguists as well. Care to chime in on this question? Can computer programming languages be considered functionally equivalent to English, etc.?

I’ll be sending him the URL link to this thread so he’ll be able to see the answers, too.

I'm with you on this one up to a point. Using a computer to spit out the text of a document in no way uses the programming language to convey the concepts in the document's native language.

That said, a computer language is a way to translate concepts you can understand into ones your computer can. Few people learn the real nuts and bolts of what goes on inside a processor: logic gates, flip-flops, etc. The language the computer truly understands is all binary, machine language. Quite honestly, the only time I've used anything approaching this was on a "computer" trainer that was less powerful than a four-function calculator. Paper tapes held binary instructions that corresponded to stored operations: move bits around between memory registers and combine them with various gate arrays to do arithmetic.

The next step up is assembler, which puts English-like words to the true machine operations; it isn't machine language, but it's not far from it. C is a bit further removed, and Basic even more so. Higher-level languages take very English-like commands and turn them into broad sets of instructions for the machine to execute.
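To make the jump between levels concrete, here's a rough sketch in C++ (the "roughly:" comments are only a hand-waving gesture at the kind of machine-level operations a compiler might emit, not real instructions):

```cpp
#include <iostream>

int main() {
    int a = 2;        // roughly: put the constant 2 somewhere in memory or a register
    int b = 3;        // roughly: put the constant 3 somewhere else
    int sum = a + b;  // roughly: load both values, ADD them, store the result
    std::cout << sum; // this single line hides dozens of lower-level instructions
    return 0;
}
```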

That said, the limitation of a language is mainly who (or what) can understand it. The Gettysburg Address makes sense to us, but the last time I checked, my CPU had no register that corresponded to "our forefathers."

Computer languages are to human languages as complex machines (computers, for example) are to living organisms. You can draw lots of parallels, analogies, etc. between the two, but in the end human languages are orders of magnitude more complex than computer languages. My Kernighan and Ritchie gives a complete grammar of the C language in an appendix. Make one for English, or any human language, and you'll probably get a Nobel prize :slight_smile:

Arjuna34

Generally, computer languages are imperative - that is, they are sets of instructions for manipulating a model of the world understood by the programmer. How abstract that model is, and the sort of concepts it encompasses may differ with the language, as well as with the sort of other software the programmer is making use of.

You should remember that what a lot of programmers do is provide building blocks for other developers, encapsulating some useful higher level of abstraction. For instance, you may develop a UI framework so that UI developers won't have to worry about the details of drawing the widgets, only say "I want to put a listbox HERE". They may use your abstraction by writing something themselves, either in the same language you used, by making use of a library you provided, or in some totally higher-level abstraction that you designed for them. Or they may simply interact with a tool you provided, thinking solely in those terms, unaware of what is going on "underneath the covers".
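A toy sketch in C++ of what that sort of building block looks like from the UI developer's side (all the names here are invented for illustration, not from any real framework):

```cpp
#include <iostream>

// Hypothetical UI framework: the framework author hides the drawing details,
// so the UI developer only has to say *what* they want, not how it gets drawn.
class Panel {
public:
    void addListbox(int x, int y, int width, int height) {
        // ... all the actual pixel-pushing would live down here ...
        std::cout << "listbox at (" << x << "," << y << "), "
                  << width << "x" << height << "\n";
    }
};

int main() {
    Panel panel;
    panel.addListbox(10, 10, 200, 300);  // "I want to put a listbox HERE"
    return 0;
}
```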

Many of these upper-level abstractions become sorts of quasi-languages in their own right (these days, many of them are becoming XML-based abstractions, for people aware of the industry trends), but they usually maintain an imperative view of the world, allowing a human to manipulate the entities in a particular abstraction.

I would say that computer "language" use only breaks away from being imperative if the particular abstraction you are manipulating is attempting to model this type of interaction, although ultimately everything is going to become a set of instructions. Simulation languages, for instance, in which you simply define the characteristics of the system to be simulated. The layout language (HTML) used to construct this page is "declarative" in that it simply defines the structure of the page (leaving JavaScript and such things aside for now). But those are still pretty much an imperative set of instructions to the browser or simulation engine to do something - namely, render that structure on your screen, or run the simulation model and tell you what happened.

The layout for this screen wasn't direct, either. The person who wrote it wrote PHP - that is, HTML templating with some other constructs allowing the programmer to specify how dynamic data (like the text of articles) was to be inserted into the page.

Programming these days often involves many levels of abstraction, and many levels of machine generation of stuff which you might think of as “code”.

It might be more interesting to turn the question around and consider Loglan or Lojban - these are "natural languages" constructed logically enough to allow automated parsing. Or the various attempts to support natural-language dialog.

It’s not only complexity - it’s context sensitivity. Human languages have this horrible habit of requiring actual knowledge of the world to disambiguate the grammar. Consider these two sentences:

One should never ride a motorcycle without a helmet.

One should never drive a car with faulty brakes.

The structure of those two sentences is damn near identical, with absolutely nothing that tells you what the "with[out]" clauses apply to without knowing the nature of the things referenced. You KNOW that YOU wear the helmet, not the motorcycle, and that it's the car that has brakes, not YOU. The ambiguity probably never entered your head, because you are unconsciously aware of the natural relationships of the objects represented. Sometime, pick up a newspaper and start reading it while consciously thinking about the automatic disambiguations of this sort you are making.

Prolog is a well-known non-imperative language… I’d post examples, but I seem to have lost all my Prolog files.

Regarding the OP… computer languages can certainly be used as communication between programmers. It’s often much easier to describe an algorithm to someone with a block of code or pseudocode than with several paragraphs of text, especially in the context of “how would I implement this algorithm in my program?”
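For instance, a standard binary search (just an illustration, not from anyone's program in particular) is probably quicker to absorb as code than as several sentences about repeatedly halving a sorted range:

```cpp
#include <vector>

// Binary search over a sorted vector; returns the index of target, or -1.
int binarySearch(const std::vector<int>& sorted, int target) {
    int lo = 0, hi = static_cast<int>(sorted.size()) - 1;
    while (lo <= hi) {
        int mid = lo + (hi - lo) / 2;
        if (sorted[mid] == target) return mid;   // found it
        if (sorted[mid] < target)  lo = mid + 1; // look in the upper half
        else                       hi = mid - 1; // look in the lower half
    }
    return -1;  // not present
}
```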

This (IMHO) answers the question of whether something like DeCSS counts as free speech - it’s not just a tool to decrypt movies, it’s a concise description of how the process works.

This is actually a hotly debated legal issue, specifically whether First Amendment rights apply to source code. I think this thread might be better suited to Great Debates.

Computer languages are not generally as expressive as natural ones (although have a look at some Perl poetry; you can also write Perl in Latin), but that doesn't mean they are only useful for communicating instructions to computers. Bertrand Russell once said of mathematics:

Computer languages serve much the same purpose for computer scientists - many abstract ideas in computer science can only be described as code, and quite a lot of code is written for consumption by humans only.

An amicus brief filed for the MPAA vs. 2600 case gives a more eloquent argument than I can. It’s kind of long, but very well written and worth a read.

Here is an interesting story related to this. DISCLAIMER: This story came to me secondhand, although through a reliable source, and I've probably forgotten many of the details. So be forewarned if you are ever inclined to repeat it.

During the '80s the Computer Science Department at The University of Texas was being swamped with students. It seemed that everyone wanted to get in on the new paradigm. The CS department became flush with the power, money and status that came from all the students begging to get in, so they started ratcheting up their requirements. They began to make the CS students take more and more CS courses until the administration finally put an end to it by saying that a department could only require X number of courses from its own curriculum - or maybe that rule was already in place and the CS department just finally hit the limit.

Either way, they weren't content to let it go at that, so they cut a deal with the Linguistics Department - a department that historically had to beg students to even pay them a visit. The CS department got the Linguistics Department to carry a course called "Computer Languages," which was then added to the CS list of required courses. Of course CS instructors taught the course, but Linguistics got the official credit.

I don’t know what ever became of the deal, but it does seem to fit the idea that computer languages and human languages have at least something in common.

Oops. My apologies. I was being a bit loose in my use of "imperative," thinking in an everyday linguistic sense rather than a formal CS sense. As Mr2001 correctly notes, there are a number of non-imperative computer languages. Lisp is another popular example. To the casual observer, a Lisp program would STILL seem like a way of handing out instructions, though (just through functional notation rather than statements). As opposed to, say, a reading of Thomas Pynchon's "Gravity's Rainbow."

Communication between programmers is a pretty narrow scope. In that sense, any organized symbol system can be seen the same way - a score passed between musicians, a proof passed between mathematicians, chemical equations passed between chemists. I wouldn’t view musical staff as “functionally equivalent” to a natural language, though.

I think it’s a matter of scope - natural languages are intended to be applicable to all phases of human existence, and are notoriously noisy, imprecise and hard to categorize (as I just illustrated with my misuse). Computer languages operate on a well-defined subspace with precision, as do some other specialized symbol systems like chemical equations.

At the end of the day, I think this will come down to a matter of semantics.

One thing that all “natural” human languages have is a class of words (or word forms) known as deictics, the meaning of which shifts depending on who is speaking and the location of the speaker in time, space, etc. Words like “I,” “now,” and “here” are simple examples. Words like this are essential to how human beings communicate, for they allow us to tie all the other words that we use (like “red,” “apple,” “smash”) to specific contexts and situations.

Do any computer languages have something equivalent to deictics? If not, then I wouldn’t say that computer languages are “functionally equivalent” to human languages.

Java has the "this" reference (er, whatever it's called), which always refers to the object whose code is currently running.

Visual Basic forms have the “Me” property which is pretty much the same thing.
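Roughly, in C++ terms (a made-up toy class, but the point is that "this" shifts its reference depending on which object is "speaking" - about as close to a deictic as these languages get):

```cpp
#include <iostream>
#include <string>
#include <utility>

class Speaker {
public:
    explicit Speaker(std::string name) : name_(std::move(name)) {}
    void introduce() const {
        // "this" means something different on every call,
        // depending on which object is doing the talking
        std::cout << "I am " << this->name_ << ", living at " << this << "\n";
    }
private:
    std::string name_;
};

int main() {
    Speaker alice("alice"), bob("bob");
    alice.introduce();  // here "this" is alice
    bob.introduce();    // here "this" is bob
    return 0;
}
```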

(I’m certainly no expert; I’m sure someone else can give you a more detailed explanation.)

My vote: computer "languages" aren't real languages in the sense that seems implied, like English or Latin. There are enough similarities that we understandably use "language" to refer to ALGOL or FORTH - especially considering that there aren't other words that come any closer than "language" to describing these things.
The idea that these things accomplish communication isn't enough to make them languages in any strict formal sense. Morse code, telephone keypad codes for remotely listening to answering machines, and the spinning of a combination lock dial also accomplish communication, don't they?
The customary tongues of peoples are just about the only things that deserve to be called languages. There are a couple of others, perhaps - sign language, with its own distinctive grammar etc., no doubt counts. But computer languages are really closer to certain other formal constructions, like mathematical notation or engineering drawing, of much narrower scope.

OK. I’m a linguistician now working as a computer programmer, degrees in both, know a little bit about the subject…

No, computer languages are not the same as human languages. Irrespective of the technical meaning of “imperative” within computing terminology, all computer languages are imperative in the linguistic sense. What’s more, they are all performative imperatives - that is, the act of saying something in a computer language actually makes things happen. You say “x equals y” in English, and you invite responses like “No it doesn’t” or “What’s y then?”; you say “x:=y;” in Pascal and x is then made equal to y - and no bloody arguing about it. This holds for higher level instructions too, because, at the end of the day, every part of a legal utterance in a computer language is an instruction to the CPU to shift a few electrons around. This is not the case in natural language.
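The same point rendered in C++ rather than Pascal (a trivial toy, but that is rather the point - saying it makes it so):

```cpp
#include <iostream>

int main() {
    int y = 7;
    int x = 0;
    x = y;  // the "utterance" itself makes x equal to y - no arguing about it
    std::cout << (x == y) << "\n";  // prints 1 (true)
    return 0;
}
```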

Having said that, utterances in a computer language can be used to convey additional information to human beings - but this depends on the interpretive abilities of the human being reading the program. That is, I can create a chunk of text which is syntactically a legal computer program - no, really, I can, on a good day at least - but which also conveys some meaning to a human being who might read the listing. But this does not mean that the computer running the program and the human being reading the listing are understanding the program in the same way.

To take an example: an introduction to a VDM textbook I once read (I lead such a varied and exciting life) talked about the meaning in Modula2 of the phrase “wet(windows)”. This is legal Modula2 (providing “wet” and “windows” are correctly defined) and it looks as though it means something in English as well - but what (if anything) it means in Modula2 will depend on any number of different factors. It might well mean, say, “42”, in the operational semantics sense (if “windows” equals six, and “wet” is a monadic function defined as “multiply by nine” :slight_smile: ). What it probably doesn’t mean is anything to do with panes of glass out in the rain.

Human beings are different from computers. Human linguistic processing is very different from the evaluation of commands in a programming language. My HO is that it would take a pretty substantial technical revolution for that to change.

You're running that on an early Pentium with the multiplication bug, aren't you…

Well it depends on who (or what) you’re communicating with.

You can't communicate with a computer. It's a tool. Just like you can't communicate with a hammer (or at least most people think so). A computer doesn't understand anything; it merely does what its instructions tell it to. The computer ALWAYS does exactly what the instructions tell it (much to the dismay of many beginner programmers who blame the machine when their program doesn't work right). It doesn't say to itself "well, obviously he doesn't mean to access that memory since he already freed it" and subsequently skip that instruction.

If you're using C++ or any other "computer language" to communicate an idea to a human, then it could work, and be considered communication. I doubt you could communicate with a human using "machine language" though, which is ultimately how you instruct the computer:

11111111111000001101011001000010 (Doesn’t convey any ideas to a person, except maybe that it’s a number)

Anyway, put me on the “instruction” side. When you write a program and run it, you’re not “communicating an idea” to the computer. You might be using the computer to communicate with another human, but ultimately it’s the other human’s native language in which the idea is communicated. We use words like “language” and “response” et al when referring to computers because we’re familiar with them.

Thanks to everyone for the great responses. I’ll send the link off to my friend now. :slight_smile: