Extremely basic programming question.

In this thread about self-studying for a career, there is a link to this article about how some people have an innate ability to grasp programming and some don't.

The article includes this problem. IANAP and know little about programming; can someone provide and explain the correct answer?

Read the following statements and tick the box next to the correct answer.

int a = 10;
int b = 20;
a = b;

The new values of a and b are:
a = 20 b = 0
a = 20 b = 20
a = 0 b = 10
a = 10 b = 10
a = 30 b = 20
a = 30 b = 0
a = 10 b = 30
a = 0 b = 30
a = 10 b = 20
a = 20 b = 10

You overwrote the value of A when you assigned the value of B to it. So both A and B = 20.

Try this:

My name is Bob.
His name is Jim.
Now my name is his name.

What is my name?

The answer should be a = 20; b = 20.

You are setting the int variable a to 10, then b to 20, then setting a to the value of b, which is 20.

So both variables are now set to 20.

Edit: What I said after this really wouldn’t apply.

Oh ok, well that seems pretty simple.

That doesn’t really clear it up… in fact, it suffers from the same problem as the original question: it assumes that the left-hand variable is always the assignee. It just so happens that most computer languages are like that. English is definitely not like that. So this question really assumes that you’ve seen enough computer languages to assume they all assign right to left.

a = 20 b = 20

is the answer, assuming that the language is C, C++, Java, C# or a similar curly-brace language.

What is very important, and not covered at all in this question, is the difference between reference types/pointers and value types.
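
For what it's worth, here is a minimal C sketch of that distinction (my own example, not part of the quiz), using a pointer to stand in for a reference type:

#include <stdio.h>

int main(void)
{
    /* Value semantics: a = b copies b's value; afterwards a and b are independent. */
    int a = 10;
    int b = 20;
    a = b;
    b = 30;
    printf("a=%d b=%d\n", a, b);   /* prints "a=20 b=30" -- a kept its copy */

    /* Pointer (reference-like) semantics: p holds the address of x, not a copy. */
    int x = 10;
    int *p = &x;
    *p = 20;                       /* writing through p changes x itself */
    printf("x=%d\n", x);           /* prints "x=20" */

    return 0;
}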

I think the intent of the question, from the context of the article, was to give the student a very basic framework of programming principles and then see if they could properly apply them.

I might poke around and see if I can find the rest, later.

I can see how someone with no programming experience would have no idea what’s going on. But it doesn’t address one’s ability to program. If I were presented with a sentence written in Chinese I would have no idea what it meant, but that doesn’t mean I couldn’t learn to read Chinese. The real test would be to provide a set of clear instructions for interpreting the code, and see if someone can follow the directions to understand the result.

One could certainly create a language where the statement “a = b” would be interpreted as “assign to b the value currently stored in a” instead of “assign to a the value currently stored in b”. Such languages are surely in the minority, but there are probably a few of them that exist, and aside from convention, there’s no reason one would be inherently superior to the other.

Now, given that the convention does exist, it’s good practice to stick with that convention, whatever it is. But the thing about conventions is that you have to actually be taught them: Knowledge of the conventions isn’t and can’t be something that you just have an aptitude for.

If OP is reading and studying at the very-beginner level that he implies, then the object of that lesson (and the question) could be something very basic – This is something we beginning programmers had to have beaten into our skulls on Day One of our Beginning FORTRAN II classes.

The equal-sign = when used in an assignment statement like this means do these steps in this order:
– Compute the value of the number, variable, or expression on the RIGHT side of the =
– Then, store that result into the variable on the LEFT side.

In particular, what does this sequence of statements mean?

int i ;
i = 10 ; /* or any initial value */
i = i + 1 ;

Assuming the student knows and is accustomed to regular Algebra, the statement: i = i + 1
seems nonsensical at first. What could it mean? Just try solving for i and see if you can do it!

The = does NOT make a factual statement or claim about what is true of the expressions surrounding it, as it does in Algebra. It is an imperative verb, commanding the computer to do the steps outlined above.
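
To make the two steps concrete, here is the same snippet as a complete C program, with my own comments spelling out the order of operations:

#include <stdio.h>

int main(void)
{
    int i;
    i = 10;       /* store 10 into i                                      */
    i = i + 1;    /* step 1: compute the RIGHT side, i + 1, using the     */
                  /*         current value of i  ->  10 + 1 = 11          */
                  /* step 2: store that result into the LEFT side, i      */
    printf("%d\n", i);   /* prints "11" */
    return 0;
}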

To make things worse, the same equal sign was also used as an interrogative state-of-being verb in creating conditional statements, such as: IF ( A = B ) . . .
which compares the values of the expressions on both sides of the = and directs the computer to take one of two different actions according to the result.

This dual usage of the = symbol was a thorn in the side of many programming language designers, and indeed caused problems in designing modern languages and compilers. Accordingly, in many relatively modern languages, two different symbols are used for the two purposes.

Thus, in Algol and its successors, := was used as the assignment operator, while = was kept as the conditional test operator.

OTOH, in C and its successors, = was retained as the assignment operator, while == was invented to be the conditional test operator.
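
A small C illustration of the two symbols C settled on (the Algol-style forms appear only in a comment, since this sketch is C):

#include <stdio.h>

int main(void)
{
    int a = 10;
    int b = 20;

    /* Algol-family equivalent:  a := b;   and   if a = b then ... */

    a = b;                 /* single =  : assignment (store b's value into a)       */

    if (a == b)            /* double == : conditional test (compare, store nothing) */
        printf("equal\n");

    if (a = b)             /* classic bug: assigns b to a, then tests whether the   */
        printf("oops\n");  /* stored value is nonzero -- rarely what was intended   */

    return 0;
}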

Modern languages let you do obscene things, like assigning a computed expression to multiple variables in one statement: i = j = k = k + 1 ;
or even assigning to a variable inside an expression, as in:
if ( i = ( j + k ) > 0 ) . . . /* Grok that if you can! */

This led to abominations like: a[i] = i = i + 1 ;
What the hell does THAT do? If, say, you just increased the value of i from 4 to 5, does it also store that 5 into a[4] or into a[5]?
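
For the curious, here is my own reading of those lines as C, with the caveat that the last one is genuinely ill-defined:

#include <stdio.h>

int main(void)
{
    int i = 2, j = 3, k = 4;

    /* Chained assignment groups right to left: k becomes k + 1 (5),
       then j gets that 5, then i does.  All three end up as 5.      */
    i = j = k = k + 1;
    printf("%d %d %d\n", i, j, k);   /* prints "5 5 5" */

    /* In the if, > binds tighter than =, so this is i = ((j + k) > 0).
       i is overwritten with 1 (true) or 0 (false), and that 0 or 1 is
       what the if actually tests.                                      */
    if (i = (j + k) > 0)
        printf("i is now %d\n", i);  /* prints "i is now 1" */

    /* As for  a[i] = i = i + 1;  -- the read of i for the array index
       is unsequenced relative to the increment, so the C standard
       leaves the behavior undefined: neither a[4] nor a[5] is
       guaranteed, and compilers legitimately differ.                   */

    return 0;
}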

That was interesting, Senegoid.

I’m not really studying programming, I just ran across the article and thought that the premise was interesting.

I totally agree with TriPolar. I’ve been programming in BASIC since the late 70s, and was totally thrown by the syntax of the OP’s example.

My first guess was to understand “int” as the function “integer”, and that the parentheses were left out for some reason. Thus, I translated the first line as “a is a number such that the next lowest integer is 10”, so that a would be somewhere between 10.0000 and 10.9999. Only after pondering it for a bit did I conclude that “int” is a verb for setting the value of a at 10 (and of b at 20). But if so, then why is “int” missing from the third line?

Can someone tell me what “int” means, and which programming language it is?

More to the point of the OP’s REAL question, which is about “how some people have an innate ability to grasp programming”, I’ll tell about the entrance exam for a programming school I attended: The exam described a stack of items, possibly playing cards, listing each item in the stack. Then there was a set of instructions in very plain and clear English about how to rearrange the items. We were then given some time to figure out the process described by the instructions, and told to write down the resulting sequence of items. That’s what programming is about – understanding and following instructions.

Re-read it again. I just fixed some typos, including one somewhat significant one in the very last sentence.

I wondered about that and re-read it a couple times initially, but then figured I just didn’t know anything about programming :smiley:

int is a data type used when declaring a variable in a statically typed programming language like C or C++.

You’re saying that variables a and b are going to hold integer values (typically 4 bytes). You could have declared a variable c of type bool, which would mean c holds boolean values (typically 1 byte), or a variable d of type double, which would hold 8-byte double-precision floating-point values.

It’s not a function or an operation, it’s just specifying the type when declaring the variable.
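
A tiny C example of declarations along those lines (bool needs <stdbool.h> in C, and the byte sizes are typical rather than guaranteed by the standard):

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    int    a = 10;     /* integer, typically 4 bytes                */
    int    b = 20;
    bool   c = true;   /* boolean, typically 1 byte                 */
    double d = 3.14;   /* double-precision floating point, 8 bytes  */

    printf("%zu %zu %zu %zu\n", sizeof a, sizeof b, sizeof c, sizeof d);
    return 0;
}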

I don’t suppose you (or anyone) has an example of this exam?

int = integer. It’s a variable type; it defines the kind of data stored in the variable.

It is present in what are called statically typed languages: languages that require the compiler to check for valid data types before running.

In this case with int a = 10 you are telling the machine to create a variable that is meant to hold data which fits the type “integer” in memory and hold the value “10” there.

This is valid because 10 is an integer.

Maybe I shouldn’t have used quotes around the 10 because int a = “10”; would produce an error, as “10” would be interpreted as a string, which is definitely not an integer!
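
A minimal C check of that claim (the exact wording of the diagnostic is compiler-specific):

#include <stdio.h>

int main(void)
{
    int a = 10;         /* fine: 10 is an integer literal               */
    /* int b = "10"; */ /* uncommenting this draws an error or warning:
                           "10" is a string literal (a char pointer),
                           not an integer                               */
    printf("%d\n", a);
    return 0;
}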

EDIT: And I’m late to the party as usual :frowning: I blame my poor typing skills.

(Bolding added.)

“int”, which appears thus in MANY modern programming languages, “declares” a variable.

In modern languages, you must explicitly state the name of every variable you intend to use, and in most such languages, also state what type of data will be stored in it. Once a variable is thus declared, you are committed to using that variable ONLY for that type of data. All variables must be so declared before they are otherwise used.

The statement: int i ;
simply declares that the variable i exists and may contain an integer value and never anything else. You can declare multiple variables on one line, e.g., int i, j, k ;

Many modern languages let you declare a variable and assign an initial value to it on one line, thus:
int i = 5 ;
but to assign any value to the variable at any later time, you just write:
i = whatever ;
without re-declaring it with the int keyword.
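
Put together as a runnable C fragment (my own illustration of the pattern just described):

#include <stdio.h>

int main(void)
{
    int i, j, k;     /* declare three integer variables on one line     */
    int m = 5;       /* declare and initialize in a single statement    */

    i = 10;          /* later assignments never repeat the int keyword  */
    j = i + m;
    k = j * 2;

    printf("%d %d %d %d\n", i, j, k, m);   /* prints "10 15 30 5" */
    return 0;
}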

A bit of histoire: In olden languages (FORTRAN, to be specific), you didn’t have to declare a variable (but you could). When the translator saw any variable name for the first time, it simply declared it automatically. This led to a problem that was not considered serious in the early days, but eventually WAS considered serious: Anywhere in a program, you might inadvertently spell a variable name wrong, simply as a “minor” typo. This caused the translator to invent a new variable and use that, rather than the intended variable. Oops. Well, once upon a time, the worst that might happen was, your program got the wrong answer. And such bugs could be very hard to find.

Fast forward a little bit. Computer programs began to be used for serious work. Critical work. Work where people’s very lives might depend on programs working right. Computers controlled industrial processes, high-tech medical machines, manned spacecraft. One little bug, and PEOPLE DIE! This is NOT hypothetical – It has happened! (Give me a moment, and I’ll find the classic link for you to read.)

Computer scientists began to devise new ways to design languages, so that the rules of the language would help programmers to write correct programs. One of the big innovations was the requirement that ALL variables must be explicitly declared before being used. With that rule, the translator (compiler) could detect any misspelled variable and warn somebody.

As for the bolded line in the quoted text above: Understanding and following instructions is really just an exercise. The real lesson for newbie programmers to learn is how to DEVISE and WRITE clear, unambiguous, and precise instructions that any semi-brainless idiot (to wit: the computer) could mindlessly follow, to get the right answer.

As it happens, I just read through the same links…

That test is not looking for a “correct” answer. The test is looking for consistent answers, from a bunch of similar problems. If you always use the assign-to-left rule, that counts as consistent. If you instead guess that the rule means something else, like assign-to-right as Chronos mentions, and answer all of the questions accordingly, you’re still being consistent, and the authors of the test think that you could easily learn the correct rule for any particular language.

I think the idea is that the ability to consistently apply a simple rule is required for programming. If you can’t figure that out, you won’t even be able to figure out why your program doesn’t behave like you expect.

Now, my WAG is that there’s something to that idea, but this test isn’t perfect. A lot of people won’t be able to get past “10 = 20”, as it’s completely counter to every prior use of the “=” sign. Some of the people branded “hopeless” by the test would probably figure things out with different syntax, or a simple explanation of assignment.

As a total aside, I’ve been teaching myself R, and I really appreciate the <- assignment operator. Just follow the fucking arrow…

a = b could also be interpreted as the claim that a and b are already equal. It would evaluate to 1 if that were true and 0 if it weren’t. For example 5 + (a = b) could be 5 or 6, depending.