Extremely basic programming question.

You know what the compiler does with that?

It notices that “i” is only used for a loop, unrolls the loop for optimization and never even creates or initializes the variable “i” :slight_smile:

That isn’t what strongly-typed means. For example, C requires the compiler to check for valid data types, but it isn’t strongly typed. Neither is C++. They’re weakly typed, meaning there are ways around the type system, so you can break invariants the type system is meant to ensure. They’re also statically typed, which is the term you should have used there; statically typed means that variables have types attached to them, not values, so a variable can only hold data of one type.

Strongly typed means that certain interfaces cannot be breached, certain invariants must always hold. Python is strongly typed for its built-in types, in that you cannot (for example) introspect into a floating-point value and view it as a sequence of bytes, as you can in C and C++. Python is also dynamically typed, which means values, not variables, have types attached to them, so a variable can hold data of multiple types.
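For instance, a minimal sketch of the dynamic half of that in Python:

x = 42          # the value 42 carries the type
print(type(x))  # <class 'int'>
x = "hello"     # rebinding the same variable to a string is fine
print(type(x))  # <class 'str'>; the type travels with the value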

For the record, Haskell is strongly and statically typed. I don’t really know of any modern languages that are dynamically and weakly typed.

Well, in fact depending on what the program does, the compiler/optimizer might go ahead and decide either to:

  1. Not reserve any new memory and use a previously declared variable’s space. This is tied to the concept of a “live range”: if there are two variables i and j, and i’s live range expires before j’s begins, the compiler may merge them so that i and j are identifiers for the same internal variable.

  2. Never even put it in memory. If a variable is sufficiently short-lived, the value(s) assigned to it might just sit in a register somewhere, or in extreme cases the values, if sufficiently predictable, might become hard-coded into the generated executable. For instance:

Load X into $1
add $1 and 5 into $3

Might become

set $3 to 7

If X can only be 2 (though for various reasons, in some languages this may not be done, in case the programmer needs to be able to directly alter the memory that X is assigned to).

But now we’re getting into nitpicky land and you’re right that what you said is probably generally true.
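Incidentally, you can watch a small-scale version of this in CPython, whose bytecode compiler folds constant expressions at compile time (a rough sketch; the exact disassembly varies by Python version):

import dis

# The compiler replaces 2 + 5 with the constant 7, so the disassembly
# shows a single LOAD_CONST 7 and no addition at run time.
dis.dis(lambda: 2 + 5)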

Sure you can, although it requires a function call:
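Something along these lines, using Python’s standard struct module (big-endian format chosen here so the hex reads naturally):

import struct

# pack() exposes the raw IEEE-754 bytes of a float...
raw = struct.pack('>f', 3.14159)
print(raw.hex())                                          # 40490fd0

# ...and unpack() reinterprets a bit pattern as a float again
print(struct.unpack('>f', bytes.fromhex('40490fd0'))[0])  # ~3.1415901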

I could accept an argument that C is weakly typed, but in my experience, strongly typed and statically typed are treated as nearly synonymous. I’ve never heard anyone besides you claim that C is weakly typed. Besides, the notion of strong typing is really a collection of properties that nobody will completely agree on. It’s better to just describe what kind of type-safety mechanisms a given language implements.

Most scripting languages are both dynamic and weakly typed. Perl, for instance.

It’s actually not so clear cut. Strong vs. weak typing aren’t very well-defined terms. Some people say that any out-of-the-blue, ad-hoc polymorphism or implicit conversion makes a language weakly typed; other people say that that’s a completely silly definition and that there’s a sort of pecking order of strongness. This usually goes something like Haskell > Java/C# > C … (to use a really incomplete list).

As for a weak, dynamically typed language, I think Lua is.

“a is tied to the value of b, and will change as b changes.” Admittedly, it looks like their tests are insufficient for distinguishing this from the standard C-style model, so they’d still mark it as consistent.

I’m a bit late to the party, but here’s a slightly more up-to-date list:

Still makes interesting reading, and also makes you realise how much we depend on software to control our everyday lives, and just how hard it is to make truly reliable software. It does make me a little more tolerant of crashes in commercial software, because even simple programs that I’ve written have had subtle and not-so-subtle bugs which were very hard to diagnose and fix.

Really? That is bizarre. C allows you to, for example, create a floating-point value bit-by-bit and feed it into the floating-point hardware on your CPU.

OK, then show me how to typecast a Perl integer value into a Perl floating-point value in the same way you can in C.

Yes, but you have to be explicit about it. You can’t do it by accident. C# is similar, except that it requires the entire block to be surrounded by “unsafe”.

You can’t do it via cast. But you can do this:
printf '0x%x', unpack 'L', pack 'f', 3.14159;

result: 0x40490fd0

And then:
print unpack 'f', pack 'L', 0x40490fd0;

result: 3.1415901184082 (the slightly different result is because of limited floating point precision)

Since I know that the low-order bit of the exponent field is at 0x00800000, I can do this if I want:
print unpack 'f', pack 'L', 0x40490fd0 | 0x00800000;

result: 6.28318023681641

This doesn’t work for the true internal types, though, like hash tables.

To expand, though: that’s not the only way something can be weakly typed. Perl allows mixed types to work in a “do what I mean” fashion. For instance:
print 1.0 + '1.0';

result: 2

Even though the second operand of the addition is a string, Perl decided that I was probably using it as a floating-point number, and converted it for me without so much as a warning.
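For contrast, a strongly typed language refuses the same trick unless you convert explicitly; a minimal Python sketch:

# Python raises an error rather than guessing the string is a number
try:
    print(1.0 + '1.0')
except TypeError as err:
    print('refused:', err)   # unsupported operand type(s) for +

print(1.0 + float('1.0'))    # 2.0 once the conversion is explicit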

Right - you are describing certain optimizations that a compiler indeed might perform.

My point was more along the lines that it’s part of a (good) compiler’s job to complain when the programmer does things such as attempt more than once to allocate memory to the same symbol name. As was noted upthread, a compiler whose mission is “Let’s see if we can make some sort of sense of this program” is likely to be less useful than one that insists on precision.

Do I recall correctly that in some languages Booleans evaluate to -1 or 0?

Well, it started out extremely basic…

In C there is (or was) no bool type. So programmers would use integers for data that actually just represented a boolean. But which integer values map to true and false? The most common definitions were FALSE as 0 and TRUE as either 1 or !FALSE, with the understanding that any non-zero value would be considered TRUE.

However, as it was up to the programmer, different teams would have different standards. And yes, there were many problems caused by not every member of the team knowing what the standard was, or not being consistent, or plugging software together that had different standards etc.

I’ve seen FALSE defined as -1 and TRUE as 0. There was even a rationale for it, but I can’t remember what it was…

Maybe because the bitwise NOT of 0 is -1 (all 1s in binary)?
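Easy to check in any two’s-complement setting; Python, for instance:

print(~0)     # -1: bitwise NOT flips every bit of 0
print(~(-1))  # 0: and flips them all back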

Yeah, when I learned C programming it was common for people to have a header file like “consts.h” with macro definitions such as:

#define TRUE 1
#define FALSE 0
#define NULL 0

So code would read more like similar implementations in other languages. One prank of the “don’t leave your terminal logged in at the undergrad CS lab” era was to surreptitiously change the value of a NULL #define in someone’s private code to a non-null value, causing memory pointer errors :slight_smile:

As for 0 being true, consider that most system programs effectively do that with exit codes. A successful invocation of a command returns 0, but any failure or warning has a unique error code to tell you what went wrong. If you view a Boolean function return as a special case of “integer as error code (0 = no error)”, and want all the code to read the same way (0 = good, no problems, what the code expected), it’s a pretty natural convention.

That’s how the standard C string function strcmp() behaves, as an obvious example.
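Both conventions are easy to poke at from Python (a sketch; the exit-status part assumes a Unix-like system where the true and false commands exist):

import locale
import subprocess

# strcmp-style three-way comparison: 0 (i.e. "false") means a match
print(locale.strcoll('abc', 'abc'))            # 0

# process exit status: 0 means success, non-zero is an error code
print(subprocess.run(['true']).returncode)     # 0
print(subprocess.run(['false']).returncode)    # 1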

Ah, yes that was probably it. So it wouldn’t matter (here) if you mixed up bitwise and logical NOT.

Mk, I get career-change whims now and then (which I don’t act on) and I think this will be my next.

What would the process look like for someone to go from zero programming knowledge, to knowing a language well enough to gain employment? No degree involved; just pick a language, learn it, and start applying for jobs? Is that even possible? Which language would be a good one to choose?

Yes, absolutely this is possible IMO. If you’re committed, I’d say about three months intensively studying evenings and weekends, followed by a further three months creating one or more of your own projects for a portfolio. If you can’t work intensively, adjust the timespans accordingly.

I mention a portfolio because, as you won’t have any formal or commercial experience, having some coding examples is basically essential. But programmers are very much in demand (yes, even now), so I don’t think that not having a degree, say, is that big a problem in itself. If anything, the danger is that you get hired for a job that you end up finding too challenging.

I personally would recommend Java or C#. Purists may be horrified by this as both of these languages have “managed memory” and so abstract you a little further from the system than, say, C++. I personally don’t think that’s a huge issue – abstraction is what programming is all about – and there are fewer confusing old libraries to avoid in these newer languages.

Java is multi-platform whereas C# is (basically) Windows-only.

Oh, and if you just want to play around and first discover if programming is for you, take a look at Processing. You can create an application to draw a circle with a single line of code, and it contains the editor and everything you need so there’s no faffing about.

It also depends on whether you’re looking for a job as a programmer or just a job programming, and what other relevant skills you have. There are, for instance, physicists, most of whose job consists of writing computer programs. Most of these physicists aren’t nearly good enough at programming to get a job as a programmer per se, but the relatively ugly, suboptimal programs they write are good enough for their purposes. Example: programs written and used by scientists often don’t have proper IO routines, but take their “input” by editing the source code and recompiling. That’s fine if the programmer and the people he’s directly trained are the only ones who ever use the program, but you’d never be able to sell it.