Extremely basic programming question.

While you were there, did you look at his other essays? There’s a lot of fun reading there, and his writing style is entertaining. You might note that one of his essays is a screed on why structured programming hasn’t lived up to everyone’s highest expectations, and isn’t Og’s gift to the world after all.

Another essay, if you can find it there, discusses programming styles that tend to make wrong code LOOK wrong (I think the actual title is words to that effect). In it he also discusses the “right” Hungarian notation, which has fallen somewhat into disuse, versus the “wrong” bastardized, useless, good-for-nothing version of Hungarian notation, which has taken the world by storm and is the version most of us are familiar with, especially in all Microsoft software.

Check it out!

You guys are just complaining about the weather.

Okay, I’ll post you a few links here:

Making Wrong Code Look Wrong, May 11, 2005. Programming styles to improve reliability by making erroneous code stand out and LOOK erroneous. Basically a screed on the importance of Hungarian notation, properly understood and used, versus the bastardized, misunderstood disaster it has widely become.

Back to Basics, Dec. 11, 2001. A screed on the problem of high-level programmers not understanding the lower-level stuff going on behind the scenes (leaky abstractions, remember), with specific examples taken from the world of C-style string handling. (You know that’s going to get ugly!) Includes a variation of the string-copy example we’ve discussed above.

See his home page for a catalog of his other essays.

No! We’re actually DOING something about it! We’re educating Sicks Ate! :smiley:

[sub]Right, Sicks Ate? :)[/sub]

I would like, as a programmer, to apologize for all programmers on forums everywhere. Every discussion of our field devolves into these little debates over trivia, and no error, no matter how slight, can pass without being corrected by some “helpful” poster.

I have a lot of trouble getting along with programmers.

That’s not a bug; that’s a feature!

Being able to catch those little, subtle errors (which all too often really do matter) is an important ability for a good programmer to have.

I actually agree with that; the problem is that they lack the little mental filter that asks, “Wait, this is a Q&A forum. Does it really matter if solution A doesn’t work in extremely-rare case B? Is it worth typing a post to correct that minor oversight instead of discussing something more important?”

Or they do have that filter and the answer comes back as “yes”.

Anyway, I’m not trying to be too negative about it; it just bothers me because I feel it fills the thread with much more “noise” than “signal”. Most of the text in this thread is about C “tricks”, or debugging how to swap two variables in VB, and very little of it is actually about the art and practice of programming.

:smiley:

Blakeyrat, Blakeyrat. We’re on PAGE 4 of this thread already! The subject of the OP has long since been treated thoroughly. We’re just having fun now! (Some of us, anyway.) The subject of this thread has diverged in ten different directions, both serious and light-hearted! Yes, we’re getting into trivia. Yes, we’re talking about C tricks, shabby and otherwise. And we’re also talking about the art and practice of programming in various ways. Everybody needs to know how you can swap variables without a temp using Ackermann’s function! One issue I’ve kept harping on is this question: How much really low-level bit-pushing should a high-level programmer actually be expected to know in order to program successfully? I’m an old assembly-programming hand, so I see things at that level. And I also see how badly high-level programmers can mess up – as in that other thread I linked, about adding up 0.1 + 0.1 + …

Thudlow Boink is right. It’s important because it comes up a lot. That 0.1 + 0.1 + … problem arose in financial software, and it caused errors all over the code because the programmers didn’t understand what was going on. And for programmers who think var-swapping without a temp is slick: Do you understand how badly that’s going to fail if you try it with floats? And do high-level programmers understand how really, really bad that string-swapping code is that I wrote? (I just thought of it on the spot.) The abstraction (all a programmer is “really s’posed to have to know”) is that strings are stored in those variables. Of course it’s going to take up even MORE memory to create the extra combined string!

[sub]And don’t even get me started on what a bad idea recursion can sometimes be, for programmers who were taught to think of it as a solution of first resort for almost everything, which seemed to be an especially big fad in the 70s and 80s.[/sub]

Here’s a serious question you all need to be able to answer: Do string variables simply contain pointers to a string that’s really somewhere else? How fully does your language “hide” that fact? If I write this:
A = “First String” ;
B = A ;
What really happens? Does B = A ; create a complete copy of the whole string and make B point to that? Or does it just point B to the same string A points to? What if I modify the bytes of A so the string becomes “First Spring”? Does your language even have a way to do that? And then if I print B, does B see the new text too, or does it still say “First String”?

In a language I used (Clipper), it was far from obvious, and it was far from obvious how to perform an experiment that would even answer that – and it wasn’t entirely obvious if there was any reason it would ever matter! (I suspected there was a leak there that could cause problems, but I didn’t find a way to expose the leak, and I didn’t spend a whole lot of time trying.)

My main justification for the recent page or two of posts in this thread is that we’re having fun kicking this shit around! [sub]And look how much Sicks Ate is learning! :stuck_out_tongue: [/sub] For this, Blakeyrat feels it necessary to apologize! [begin mild snark] He must be a middle manager suit [end mild snark] :stuck_out_tongue:

ETA: The determined Real Programmer can write COBOL programs in ANY language! :cool:

Although I know what you’re talking about in general, in this case I don’t really understand your objection. The OP was answered in the first few posts; since then, programmers have been exploring some programming topics, which is fun for most programmers.

Yes, and I’m…delighted? :dubious:

Rightright. I glean what I can, and glaze over at what I can’t…I just summoned the courage to dive in to the last page-and-a-half!
So, to start yet another tangent, or perhaps to go back to a previous one (I’ve lost track)…

How similar is learning a programming language to learning a spoken/written language?

Since the thread started (and has recently gotten back around to) discussing a test that would identify ‘natural’ programmers, I remembered another, similarly purposed test: the Defense Language Aptitude Battery. The purpose of the DLAB is to see who has a knack for picking up language rules.

From what I remember, it starts out very basic…telling you what properties a verb has, for example. Then it tells you what properties the subject of a sentence has. Then it tells you how a subject acts in relation to the verb etc., all done with a made-up language. I remember they just kept adding on rules and it got very complex towards the end. Fun, though.

Now that I’ve typed this, I tend to doubt that there’s much crossover.

That essay about Making Wrong Code Look Wrong is fascinating, because I recently found myself re-inventing something similar to Hungarian Notation. I was maintaining a big, ugly, ancient Fortran code, and decided at one point to convert the code to RTF so I could put in color-coding (I wrote a little script to convert it back to plain text before compiling). Everyone I mentioned this to said “Why? There are editors that will do color-coding for you automatically.” Yes, but not the same color-coding I wanted. See, most of the variables represented physical quantities, but some variables were the quantity itself (in some units), some were the base-10 log of the quantity, and some were the natural log of the quantity. And it doesn’t make sense to do things like set a linear variable equal to a logarithmic one, or take the log of something that’s already a log, and so on. And if the code had been written with a solid naming convention like that in the first place, it’d have been a lot easier to maintain.

We’re still waiting for unit testing to finish on that one. Will have to get back to you.

Emacs could probably do what you need, especially if the variables are uniquely named across the project, or even just within each source file. It wouldn’t be too hard to set up, although it sounds like you’ve already got a system you like. (And, you might not care for Emacs.)

Probably not, but your DLAB example isn’t crazy. Both types of languages have vocabularies, syntax, and grammar. In one regard, computer languages are easier to learn than natural languages: the grammar rules are more rigid, the vocabulary is much smaller, and normally there’s only one possible interpretation, at most, for the various phrases and statements you can make in the language.

On the other hand, the two types of languages obviously have very different purposes, so learning French is never going to be much like learning Forth, say. As with natural languages, though, once you’ve learned a couple of computer languages, learning more of them is easier than learning the first one was.

I’d say it’s a lot easier. The vocabulary is much smaller, although sometimes cryptic. The syntax is much more regular, and rarely is there ambiguity. But really, when you learn another natural language, you aren’t just memorizing a dictionary; you have to learn to use the language to read, hear, speak, and write. Writing code in a programming language is not like learning to converse; it’s more like learning structured composition from a really strict and pedantic teacher, using a limited vocabulary.

In addition, the subjects being discussed in this thread highlight the depth of complexity involved in the semantics. The brief dictionary-type definitions of commands may suffice for the vast majority of programs, but the problems come from the long, detailed functional definitions, and each of those may be considered the equivalent of an incomprehensible regional accent. And this only addresses the core part of languages. Most are dependent on function libraries that can be quite extensive, and those combine the regional-accent problem with a lot of idioms. So there can be some figurative parallels to natural language, but the difference is that you can’t just get by with enough to make yourself understood to a computer. Your vocabulary, grammar, and usage have to be impeccable.

On the other hand, everyone can learn at least one human language, but not everyone has the aptitude to learn any programming language, so in that sense, learning a programming language is harder.

And we’re getting rather far afield-- Nobody in this thread is talking about programming in Beginner’s All-purpose Symbolic Instruction Code. :slight_smile:

You could make wrongheaded assignments like that because all those vars are simply floats (or REALs, in Fortran-speak), and the compiler doesn’t know one from another. That was part of Spolsky’s point, with his example of “safe” and “unsafe” character strings.

In some modern languages, you can create a distinct user-defined data type (or class) for each of those. Each would simply be a float (or REAL) behind the scenes, but the compiler would prevent you from assigning one type to another unless you used explicit conversion functions.

I think you can do the same thing in some versions of SQL (I’m thinking of Microsoft T-SQL) – You can define your own data types for database fields. You could have a “custname” data type just for customer names, “custid” data type just for customer id code, “productid” data type just for product id code, and so forth – even though, behind the scenes, they are all just char strings. And another useful result of this, is you could define, say, data type custid as char(20), and then EVERYWHERE you create a custid field, it will automatically be a char(20).

Ah, but do you recall the early days of Beginner’s All-purpose Symbolic Instruction Code :slight_smile:
(the :slight_smile: was actually part of the language name, you understand!)
where all variable names consisted of a single letter, or a single letter followed by a digit?
This gave you a total of 286 easy-to-remember variable names, so you didn’t have to mess with Hungarian, Polish, or Swahili naming conventions. And ALL of those variables were global, and you couldn’t pass any arguments to subroutines. You just put the right values into the right variables and called the subroutine. And the original teach-yourself book by Kemeny & Kurtz was only about 100 pages long. Those were the days!

I’m going to opine that learning a programming language doesn’t have a lot of overlap with your innate language skills (other than your ability to read the 1200-page beginner’s teach-yourself book!), but I think learning programming has a lot of overlap with your mathematics skills.

Programming – designing the algorithms, the program logic, keeping track of all those variables, and all that stuff – takes a heavy dose of logical thinking and symbolic manipulation skill, and abstract thinking (especially for modern high-level languages). I think those are largely similar to math skills.

The brain has its specialized areas for language skills, and separate specialized areas for the logic, numeric, and symbolic skills. (They are even in separate sides of the brain, aren’t they?) I think programming tends to exercise the logic/numeric/symbolic circuits more than the language circuits.

There’s been a language feature I’ve wanted for a long time but never seen: proper support for physical units.

Units are awesome. They tell you if the computation you’re doing makes sense or not. If you’re supposed to come up with a speed and instead have m/s^2, you know you did something wrong. If you try to take the square root or logarithm of m/s, you know you did something wrong. And so on.

There should be a language that enforces unit correctness at compile time. Yes, it’s possible to get a fair amount of unit support via objects, but that’s just not the same IMHO. I’ve played around with C++ templates a bit in this direction–they seem like the best bet for this kind of thing–but so far haven’t succeeded. You really want a language with built-in support.

In retrospect, it seems like Fortran should have been the language that does that. It was designed for scientific work.

Okay, I just got it! The OP is specifically asking about that – It’s right there in the very title of the OP! :smack: And we’ve hardly said anything at all about it.