Things that you can't fathom people not understanding

I don’t know anymore, but I did figure it out years ago for a freshman year programming assignment.

I always found recursion to be stupid easy in concept*, but a cast-iron bitch in execution. That’s why, a few years later when I had to use it in AI class, I wanted to mailbomb the dicks who came up with LISP.

*that describes a lot of academic things for me. Easy to understand the concept and its applications, devilishly hard to actually implement a solution or application using that concept. Most math classes were that way in particular. For example, I understood WHY the derivative of a position function was the velocity, and the second derivative was the acceleration (it’s clear as a bell when graphed), but I had the hardest time actually working out the calculations and arithmetic.

Back to the OP… I’ve never understood how people can have a super-accurate rule-of-thumb understanding of things, and be awesome at the application of that rule of thumb, but have NO understanding whatsoever beyond that. That’s not how my mind works; I generally have to have a conceptual framework of how something works that I can fit details and rules of thumb into, or I feel very uncomfortable doing/using whatever it is.

For example, I’ve known people who can cook extremely well, but if you asked them WHY a cast-iron skillet cooks a steak better than an old aluminum skillet, they’d have absolutely no concept of heat transfer, thermal mass, etc…

I also don’t get how people who are confronted with these rules of thumb and lore-type bits of knowledge, don’t get really super-interested in finding out why those things are what they are. I’m just super-curious, and really can’t quite fathom thinking “Hmm… I don’t understand how that works, but I use it every day, and I have no interest in learning how it works either.”

“How it works” is the luxury of somebody who doesn’t have to work for a living. (unless of course your job is to figure out how things work)

There’s a closed form for it that’s a little complicated to write without an equation editor – it’s on the Wiki page, though.
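For reference, the closed form in question is Binet’s formula, Fib(n) = (φⁿ − ψⁿ)/√5. A quick sketch of it in Go (the function name is mine, and note that float64 rounding makes it a curiosity rather than a workhorse):

```go
package main

import (
	"fmt"
	"math"
)

// BinetFib computes Fib(n) via the closed form (Binet's formula):
//   Fib(n) = (phi^n - psi^n) / sqrt(5), phi = (1+sqrt5)/2, psi = (1-sqrt5)/2
// Float64 rounding error makes this unreliable past roughly n = 70.
func BinetFib(n int) uint64 {
	sqrt5 := math.Sqrt(5)
	phi := (1 + sqrt5) / 2
	psi := (1 - sqrt5) / 2
	x := (math.Pow(phi, float64(n)) - math.Pow(psi, float64(n))) / sqrt5
	return uint64(math.Round(x))
}

func main() {
	fmt.Println(BinetFib(10)) // 55
	fmt.Println(BinetFib(20)) // 6765
}
```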

I don’t buy that; it’s not a question of luxury or not, it’s a question of curiosity. It’s not that they want to know, but don’t have the time, it’s that they genuinely don’t give a shit.

You can always fix that with a caching scheme :stuck_out_tongue:

Something like (in Go)



func Fib(num int) int {
  if num <= 0 {
    return 0
  } else if num == 1 {
    return 1
  }

  // Allocate a map with <num> spaces so the algorithm doesn't have to realloc
  // on every store (Only commented to clarify Go syntax)
  cache := make(map[int]int, num)
  cache[0] = 0
  cache[1] = 1

  return fibHelper(num, cache)
}

func fibHelper(num int, cache map[int]int) int {

  var c1 int
  var ok bool

  // If cache[num-1] doesn't exist, compute and store it
  // (I only commented this due to idiosyncratic Go syntax)
  if c1, ok = cache[num-1]; !ok {
     cache[num-1] = fibHelper(num-1, cache)
     c1 = cache[num-1]
  }

  // Computing Fib(num-1) entails computing Fib(num-2) (at least for num>1)
  // So we can be sure it's cached
  c2 := cache[num-2]

  return c1 + c2
}



I’m not sure why you’d bother, but you could (at least that’s how I’d write it recursively).

I use my car every day; do I care how an internal combustion engine technically works? Hell no. It’s not a lack of curiosity, it’s simply that I don’t need to know. I’m not a gearhead. I don’t see that as a weakness.

Wow, my version is a substantial performance increase, and I think illustrates how bad naive recursion can be:

If you use Senegoid’s Fibonacci function (called SlowFib in there) for Fib(93), the server times out on the request*. Mine (Fib) computes almost instantly. Why Fib(93)? Well, my friend, Fib(93) is the last Fibonacci number you can compute before you overflow a 64-bit unsigned integer :p. (In practice you can fix this with Big Integer packages, but I didn’t bother).

  • And yes, it’s not just the speed of both combined, even if you comment out the Fib line it will time out.
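Senegoid’s actual code isn’t quoted in-thread, but the naive version under discussion presumably looks something like this (using the SlowFib name from above):

```go
package main

import "fmt"

// SlowFib is (presumably) the shape of the naive version: each call
// spawns two more calls, so the call tree has on the order of Fib(n)
// nodes, i.e. exponentially many.
func SlowFib(n uint64) uint64 {
	if n <= 1 {
		return n
	}
	return SlowFib(n-1) + SlowFib(n-2)
}

func main() {
	fmt.Println(SlowFib(30)) // 832040, still tolerable
	// SlowFib(93) would make on the order of 10^19 calls; hence the timeout.
}
```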

Oh…

Forgot to make SlowFib a uint (not that it matters computation-speed-wise; adding is adding no matter what).

Also, it looks like the Golang sandbox server caches program outputs so if it looks like it just times out almost instantly it probably just cached the [process took too long] message. On the first run it only times out after 5-10 seconds or so.

Actually, screw allocating memory for a map, if you have a language with multiple return values, just use a helper that always returns Fib(num) and Fib(num-1), then you never need to recalculate anything, and you don’t have to waste memory on a giant map.

If your language doesn’t have multiple return values (or something similar like tuples), you can just return an array of size 2 – or use some mild currying in a functional language.
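A sketch of that pair-returning approach in Go (function names are mine):

```go
package main

import "fmt"

// fibPair returns (Fib(n), Fib(n-1)). Each level makes exactly one
// recursive call, so nothing is recomputed and no map is needed.
func fibPair(n int) (uint64, uint64) {
	if n <= 1 {
		return uint64(n), 0
	}
	cur, prev := fibPair(n - 1)
	return cur + prev, cur
}

func Fib(n int) uint64 {
	f, _ := fibPair(n)
	return f
}

func main() {
	fmt.Println(Fib(10)) // 55
	fmt.Println(Fib(93)) // 12200160415121876738, the last one that fits in a uint64
}
```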

All that said, even my “optimized” version is slower than an iterative solution. Using Go’s benchmarking framework, computing a recursive Fib(93) takes 430 nanoseconds per call, while an iterative Fib(93) takes 110 nanoseconds per call.
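For concreteness, here’s roughly what the iterative version and a Go benchmark harness look like (names are mine; the benchmark would normally live in a _test.go file and run via `go test -bench=.`):

```go
package main

import (
	"fmt"
	"testing"
)

// FibIter is the plain iterative version: two running values and n
// additions, so Θ(n) time and constant space.
func FibIter(n int) uint64 {
	var a, b uint64 = 0, 1
	for i := 0; i < n; i++ {
		a, b = b, a+b
	}
	return a
}

// A Go benchmark is just a function named Benchmark* taking *testing.B;
// the framework picks a b.N large enough to report a stable ns/op.
func BenchmarkFibIter(b *testing.B) {
	for i := 0; i < b.N; i++ {
		FibIter(93)
	}
}

func main() {
	fmt.Println(FibIter(93)) // 12200160415121876738
}
```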

The conversation about code going on is an example. I am not even curious how code works. I just want it to work.

I tried benchmarking the slow/“traditional” version that Senegoid presented with Fib(93). It doesn’t work; the test times out, even after setting the timeout threshold to over an hour. Kids, don’t use the naive Fibonacci. Ever.

In measuring the time that an algorithm takes to run, what’s of interest is how the running time grows with the size of the input, rather than the actual number of microseconds each operation takes.

Fibonacci numbers grow exponentially, at a rate of roughly 1.6[sup]n[/sup] (powers of the golden ratio). Computing fib(n) with a naive recursive algorithm will likewise take time roughly proportional to 1.6[sup]n[/sup], whereas computing fib(n) with a simple iterative algorithm takes time that is proportional to n. Big difference.

Note that the recursive factorial algorithm doesn’t suffer this problem. The recursive method is a little slower than the iterative method due to the extra overhead of all the function calls – but both the recursive and iterative algorithms for fact(n) take time proportional to n. That growth rate is the important measure of the time that an algorithm takes.
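To make that concrete, a sketch of both factorial versions in Go (names are mine): each does a linear number of multiplications, so both are Θ(n); the recursive one just pays extra for n stack frames.

```go
package main

import "fmt"

// FactRec multiplies on the way back up the call stack: n frames deep,
// one multiplication per frame.
func FactRec(n uint64) uint64 {
	if n <= 1 {
		return 1
	}
	return n * FactRec(n-1)
}

// FactIter does the same multiplications in a single loop, no extra frames.
func FactIter(n uint64) uint64 {
	result := uint64(1)
	for i := uint64(2); i <= n; i++ {
		result *= i
	}
	return result
}

func main() {
	fmt.Println(FactRec(6))  // 720
	fmt.Println(FactIter(6)) // 720
}
```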

ETA: The important lesson here, which seems to be lost on a lot of CS students (and professors, I think!) is this: Recursive methods are elegant and dandy and easy to code (given a suitable language) and suitable for many kinds of problems, BUT you still need to know what you’re doing, and you still need to think about it, and you still need to understand your algorithm.

Too many people don’t. That’s why I wrote earlier that recursion is too often just a crutch for programmers who think that that’s all they need to know.

There’s a famous recursive function for demonstrating this, called Ackermann’s function (it comes in several variants). They all get really really big, really really fast. (ETA: I’m pretty sure we’ve had threads before that discussed this.)
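For reference, here’s a sketch of the common two-argument variant (the Ackermann–Péter function) in Go; the one-argument Ack(n) mentioned below presumably fixes or diagonalizes these arguments:

```go
package main

import "fmt"

// The Ackermann-Péter recurrence:
//   A(0, n) = n + 1
//   A(m, 0) = A(m-1, 1)
//   A(m, n) = A(m-1, A(m, n-1))
func Ack(m, n uint64) uint64 {
	if m == 0 {
		return n + 1
	}
	if n == 0 {
		return Ack(m-1, 1)
	}
	return Ack(m-1, Ack(m, n-1))
}

func main() {
	fmt.Println(Ack(2, 3)) // 9
	fmt.Println(Ack(3, 3)) // 61
	// Ack(4, 2) is 2^65536 - 3, a number with 19,729 digits; this naive
	// version would blow the stack long before finishing anything at m=4.
}
```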

I don’t recall specifically, but I vaguely recall that with the recursive algorithm I tried (this was back circa 1971, on a supercomputer of the day), I was able to compute Ack(n) up to n=3 or maybe 4 with just a few seconds of CPU time. But it explodes beyond all reason at Ack(4) or maybe it was Ack(5), and allegedly would take centuries or something like that to compute.

IIRC, I stared at the function for a while, and worked through all the steps by hand (just as I recommended doing with Fact(6) and Fib(6) above), and found the iterative way to do it. With this, I was able to compute Ack(the next 2 or 3 numbers) very quickly. Beyond that, the numbers got waaaaaaaaaay too large to deal with.

ETA: Here it is: My big recursion rant, starting at Post #31.

Upon reviewing that thread, I see it goes into an extensive discussion of the pros and cons of recursion, with all the same examples: Fibonacci, factorials, Ackermann’s function, it’s all there. Even the exact words I wrote above, “get really really big, really really fast,” are there. And some of the same people are participating in that thread, Jragon notably. We’re just re-hashing old territory, including my screed about recursion.

Right, though my “optimized” approach is also only Θ(n), just like factorial (each call makes exactly one recursive call, which buries its way down until n=1). It’s only the naive one that has a ridiculous explosion.

Of course, languages optimized for recursion bypass these arguments entirely. Obviously if you’re using Haskell you want to make sure you’re using tail recursion whenever possible, but in general the performance arguments are going to be somewhat different than the ones for imperative languages. (Of course, we can also get into arguments about whether Haskell recursion technically counts as recursion, given what it looks like at the assembly level, or whether memoization defeats the point of the argument entirely, since if you’re doing it you already know what you’re doing, and so on.)

And yeah, I did test proportionality – my recursive function’s ns/op is generally a constant multiple of the iterative’s ns/op. The point wasn’t a constant speed increase; it was that at as low a number as 93 (actually 60-something, I think) the naive version takes prohibitively long to run while the Θ(n) one still takes an imperceptible amount of time. And actual ns/op is useful in practical situations. Big-theta is more important, but we use ns/op all the time in math packages to make sure we’re being responsible about managing extra-algorithmic overhead (extra time spent in allocations, etc.).

I have never had an issue with someone that doesn’t know all the technical details of their car. That is what I’m here for.
What I can’t fathom is people that don’t know basic things about operating their automobile.
I get about one call a week from someone who can’t get the key to turn in the ignition. They are stumped for what they should do.
Really? Locking steering columns have been here in the US since 1968, and you have never come across a stiff key that requires you to move the steering wheel a bit so you can turn the key?
You must have led a charmed life.

It helps to realize that everyone is going to have a first time to find out that unpleasant fact of modern cars. The ones that call for help just found out about it. If they call again with the same problem, well, then it might be hard to fathom.

That’s the kind of thing that I’m talking about- people who don’t know enough about their vehicles to tell a dead battery from an empty fuel tank when their car doesn’t start.

I swear a lot of them think that there are tiny creatures under the hood in a hamster cage the size of a beach ball and they drink gasoline and fart exhaust fumes.