A question about computer programming: What's wrong with mutable variables?

I don’t know BASIC, but in C:

#include <stdio.h>

int addOne(int x) {
    return x + 1;
}

int countNTimes(int n) {
    if (n <= 0) {
        return 0;
    }
    return addOne(countNTimes(n - 1));
}

int main(void) {
    printf("%d\n", countNTimes(3));
    return 0;
}


Edit: This is where passing functions around gets handy. You no longer need a function that adds 1 to a variable. You instead have a function that takes in a FUNCTION and adds 1 to whatever that function returns. That’s exactly what my ill-formed Succ example above was doing.
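For instance, a minimal sketch of that idea in Scala (the names succ and zero are made up for illustration; this is not the original Succ code):

// A function that takes a function and returns a new function
// that adds 1 to whatever the original function returns.
def succ(f: () => Int): () => Int = () => f() + 1

val zero: () => Int = () => 0

// "Counting to 3" is then just three applications of succ.
val three = succ(succ(succ(zero)))
println(three())  // prints 3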

But for my example (I didn’t explain this clearly), the point is supposed to be that it returns the right output after a series of “count ONE more time” function calls, so to speak.

Like, suppose we need to count the number of times something happens, as it happens, such that we can’t just say “count n times” for whatever n we like; instead we need to say “count one more” each time the thing happens.

I believe you. My conundrum is that right now I don’t see how to avoid returning a completely new state every time.

That’s not quite true; let me clarify. I can see ways to do it using deltas (just as someone else mentioned), but that doesn’t seem to satisfy the “no mutability” rule that some people claim should apply. With deltas, something is changing. (Right?)

I am not sure I am understanding this. I was asking why (and how it is possible to) not use mutability, but you seem to be saying the opposite–that you should avoid immutability. Did I misunderstand?

Okay, so this brings something up that’s related. When I went to stackoverflow.com to ask a few related beginner’s questions, the help I got was in the form of telling me about some method or other in the Scala library that basically does everything I wanted done under the hood. This is fine from an actual “how to solve the problem” standpoint, but was uninformative from the “how do you actually do this?” standpoint. But I began, in any case, to get the feeling that avoiding mutability really just meant passing mutability down to some below-the-hood level not typically accessed in the code itself. Is that what you’re saying here?

How can I put this? You’re not going to get a global counter function that you can call from anywhere and have it increment by 1. Here’s an example in Go, which is NOT a functional language and is very C-like, but has functions as first-class values and can still demonstrate it:

If you want a counter function, you have to pass it “down the line” into other function calls.

Well, that’s how it seems to me! But people tell me you never absolutely need mutability. This is exactly the kind of example, though, where mutability seems necessary to me, hence my puzzlement. Are you saying the solution to my puzzlement is to realize the claim “you never actually need mutability” is exaggerated?

This is the part where it’s difficult to explain unless you’ve really, really toyed with Haskell or have a theoretical math background.

No, you cannot do everything you can do with mutability without mutability. However, you can always do something functionally equivalent to what you’re doing with mutability.

In the case of things like global counters, this means passing around functions of functions of functions that take functions and return functions and… so on. You can never call

count()

somewhere and have it set a value somewhere in the nether. That’s impossible. What you can do is pass around a function that returns another function that adds 1 to some function’s value. So to call this from anywhere, you have to pass that function into other parts of your program, so it can be called when needed.

Think about the structure of your program: imagine, instead, that count(x) returned x+1. How would the structure of your program change? You can just pass “x” everywhere, call count(x) every time you need its functionality, and pass the new result along afterwards.
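As a rough sketch of that structure in Scala (the names processEvent and processAll are made up for illustration), the count becomes a value that each call receives and returns, rather than a variable anything updates:

// Hypothetical example: the counter is threaded through calls as a value.
def processEvent(event: String, count: Int): Int =
  if (event == "interesting") count + 1   // the "count(x) returns x + 1" step
  else count

def processAll(events: List[String], count: Int): Int =
  events match {
    case Nil          => count
    case head :: tail => processAll(tail, processEvent(head, count))
  }

// No global is ever updated; the final count is just the return value.
println(processAll(List("interesting", "boring", "interesting"), 0))  // prints 2

Every caller that needs the current count has to be handed it (and hand back the new one), which is exactly the “pass it down the line” point above.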

I think the difficulty here is that there’s no real example. You presented a global counter and said “you can’t do that without mutability”. But we have no idea what you’re counting. If you presented a real problem to solve that happened to also require counting, this would be easier to illustrate.

Because, as I showed, you can count to 3 in a functional paradigm. But you (rightfully) don’t accept it, because it’s not used how you would use a real counter in a real programming problem. I wrote the same program you wrote, in that it produces the same output, but it doesn’t really “prove” much because your real assertion was that you could use this counter to count things from some random part of the program.

The point is, in FP, you CAN count something during the course of the program, but lacking an actual problem to solve that involves this, it’s difficult to show why this is possible.

I think I understand you, actually.

I think it turns out I’m failing to discuss a well-defined function with my “count” example. It’s like I’m saying “I need a function that adds one n number of times,” but am refusing to say what “n” is. When you give me code that “adds one n number of times” for all n (thereby defining n, so to speak, through quantification), I reject it, saying, “no, I just need it to add one n number of times.” At that point the correct reply is that my function description doesn’t determine any well-defined function, because “n” is left open.

This helps, I think, believe it or not.

I’m also starting to think the appropriate response from me to all this should after all be “don’t worry, be mutable til it starts causing you problems.”

Hmmmm… Every example I can think of that seems to “require mutability” to me involves a user interrupting the operation of the program (the user inputting values, or the user telling the program to stop counting, or something along those lines), or having something in the program itself potentially changed each time it’s run.

Like, for example, take this pseudo-Scala code:



val n = 1000000
var c = 0

for (i <- 1 to n) {
  if (isPrime(i)) c += 1   // assumes some isPrime(i: Int): Boolean exists
}

println(c)


So what I’m hoping for is to be able to have something with no vars in it, that counts primes like this, no matter what positive integer n is set to in the first line.

The solution I get when I ask about something like this elsewhere is usually along the lines of:

:smiley:

If that really is what people mean by “you never strictly need mutability” then okay, but I guess I thought it was more of a theoretical claim, not just a claim that “there’s always an app for that” so to speak.

Assuming you have a function called prime that determines whether a number is prime or not, in Haskell you do this:


length (filter prime [1..1000000])

“Filter” is a function in functional programming languages that applies a boolean function to every element of a list and keeps, in a new list, exactly the elements for which it returns true. Then you just take the length of that list.

If you want a primality test in Haskell, look here.

Equivalent in Scala:


def countPrimesUpTo(n:Int) = (1 to n).filter(isPrime(_)).size

:smiley: So basically you’re saying what I put in the quote box in the post just previous to yours.

Okay, I think I understand. I think I’ve misinterpreted the claim that you never strictly need mutability. I took it as a theoretical claim about the logic of how things work in programming generally, but it sounds like people maybe actually just mean to say “there’s an app for that.” Whatever mutability-needing function you have in mind, somewhere in the library you’ll find something that handles that stuff under the hood for you.

The more theoretical and universal claim about computer theory is probably true, too, for reasons I think I understand. (Mutable variables are just a way to calculate in steps something that can, in theory, be defined completely in a single formula with no mutable variables in it.) But not in a way that actually affects any code I’ll be writing any time soon.
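For example (a small illustration of that point, not anything from the thread itself), the same running total can be computed step by step with a var, or all at once as a single fold expression with no mutable variable in it:

// Step by step, with a mutable accumulator:
var total = 0
for (i <- 1 to 10) total += i

// The same thing as one expression, with no mutable variable:
val total2 = (1 to 10).foldLeft(0)((acc, i) => acc + i)

println(total)   // 55
println(total2)  // 55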

Or, as a wise man once said:

I think you both overstate and understate the situation. All of the methods in my example (“filter”, “size”, and implicitly “to”) can be implemented simply without any vars in them, so in that sense avoiding var is not “purely theoretical” and can be practically done. (You do need to understand recursion well, though – do you?)
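To illustrate that claim, here is one possible way to write a filter and a length function with plain recursion and no vars (sketches with made-up names, not the standard library’s actual definitions):

// A filter written with recursion only: no loop, no mutable state.
def myFilter[A](xs: List[A])(p: A => Boolean): List[A] = xs match {
  case Nil          => Nil
  case head :: tail =>
    if (p(head)) head :: myFilter(tail)(p)
    else myFilter(tail)(p)
}

// A length (i.e. a "size") written the same way.
def myLength[A](xs: List[A]): Int = xs match {
  case Nil       => 0
  case _ :: tail => 1 + myLength(tail)
}

println(myLength(myFilter(List(1, 2, 3, 4, 5))(_ % 2 == 0)))  // prints 2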

But, as was illustrated, for a lot of list/set/map operations, it pays off to know the higher-order functions on collections (map, flatMap, filter, zip, etc.), as a hell of a lot of things can be expressed in terms of these functions. In fact, when you write a for loop in Scala, the compiler is actually translating the loop into successive calls to these functions. Using these functions can get you a lot of mileage.
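Roughly, for instance (a simplified illustration of the translation, not the compiler’s exact output):

// A for comprehension...
val evens  = for (i <- 1 to 10 if i % 2 == 0) yield i

// ...and approximately what the compiler rewrites it into:
val evens2 = (1 to 10).withFilter(i => i % 2 == 0).map(i => i)

println(evens == evens2)  // prints true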

Every time I see an example, I can work through it and understand it, but I’m not to a point where I can immediately grasp most examples at a glance.

Are you able to show me an example like this “count” thing I’ve been after in the last few posts? It doesn’t have to be about finding prime numbers specifically.

I totally understand it pays off to understand the higher-order functions. I just have a personal interest in understanding the underlying logic of all this.

Actually, I believe it’s not just “a lot”; I think map and reduce are sufficient to be Turing complete, as long as you allow function definition and recursion as well.