# Need math problem to crash '80s CPUs

This is for a story. I want a character to overtax/overheat a retro, '80s-era CPU by feeding it an elegant and/or short math problem.

But I don’t know much about programming, so I don’t know (1) what such a formula would look like, or (2) what an '80s CPU would do when fed a short, simple formula it can’t finish. Is it possible to give the CPU a valid problem, with no syntax error, that it can’t compute not because it doesn’t have enough memory, but just because it would take too long? Is it possible to give the CPU an instruction it would overheat attempting?

If it must run out of memory, is it possible to write the program so that when it runs out of memory it will automatically attempt the same program again, over and over?

Extra points for elegance: not too long, but not as short as “divide by zero” (which just produces an error), and for describing what the CPU would do in detail. I’d like the CPU to be just a chip on a wall, maybe with a display, not a C64 hooked up to a TV, so barebones would be awesome.

Mods, please move this to the appropriate forum if it shouldn’t be here.

CPUs don’t work that way. Every program is eventually reduced to a series of simple memory and arithmetic operations, which the CPU executes one after another regardless of how complex the original problem was.

If a program has a lot of stuff to do, it will take longer to run than one which does not.

Here is a simple program that a CPU will never finish, no matter how long it runs:

1. Set the value of X to 1.
2. Set the value of X to the logical negation of X.
3. Goto 2.
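As code, the three steps look like this (a Python sketch; the loop is bounded at a million iterations only so the demo terminates, since the real version has no exit condition):

```python
x = 1                        # 1. Set the value of X to 1.
for _ in range(1_000_000):   # stands in for "3. Goto 2." repeated forever
    x = 1 - x                # 2. Set X to the logical negation of X.

# A million negations later, x is right back where it started.  The CPU
# has done a million operations and produced nothing new.
print(x)  # prints 1
```

An assembly version would be just two instructions: a NOT and a jump back to it.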

Running this won’t overheat your CPU; it will just do the same thing over and over again as long as you let it. Or until it breaks.

The point is, as far as the CPU is concerned, one problem is indistinguishable from another; it doesn’t matter how simple or elegant the problem is, it’s just bits and arithmetic operations from the CPU’s perspective.

But since it’s for a story, the correct answer is, “Computer, what is the meaning of existence?”

Thanks for your input. I am looking for something similar to “what is the meaning of existence” but something insurmountable.

What about calculating pi to as many decimals as possible until the system runs out of memory? Is it possible to have the program restart when it runs out of memory, or does one need a “cheat” where it reserves the few bytes it needs to detect that the rest of memory is used up, and then reboots? (If that’s possible, how much memory would realistically be reserved for this?)

Ideally, if I must use an out-of-memory problem, I’d like something sexier and less clichéd than calculating pi to X decimal places, if anyone has other suggestions.
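For what it’s worth, the restart-on-out-of-memory structure can be sketched in modern Python. Gibbons’ unbounded spigot algorithm really does yield digits of pi indefinitely; the `MEMORY_LIMIT` cap and the three-restart bound are artificial stand-ins so the demo terminates instead of actually exhausting RAM, and a real version would catch a genuine out-of-memory condition and loop forever:

```python
def pi_digits():
    """Gibbons' unbounded spigot: yields decimal digits of pi indefinitely."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = (10 * q, 10 * (r - n * t),
                       (10 * (3 * q + r)) // t - 10 * n)
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

MEMORY_LIMIT = 500     # hypothetical cap standing in for real RAM exhaustion
restarts = 0
while restarts < 3:    # a real version would loop forever: "reboot" and retry
    digits = []
    try:
        for d in pi_digits():
            if len(digits) >= MEMORY_LIMIT:
                raise MemoryError        # simulated out-of-memory
            digits.append(d)
    except MemoryError:
        restarts += 1  # start the whole calculation over from scratch

print(digits[:10])     # [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
```

As for the “cheat”: the restart logic itself needs only a handful of bytes (or a couple of CPU registers) to live in, like the two counters above, so a few dozen reserved bytes would be plenty in assembly.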

Like friedo said, all you need is an infinite loop to keep the CPU chugging, but that really won’t break it any more than any other operation will.
The old trope about giving a computer an unsolvable problem and making it go BOOM just isn’t reality.

Friedo is absolutely correct. They don’t work “harder”, they just work “longer.” There are weird exceptions, but they’re because of physical defects, not something that you can plan on exploiting.

On the other hand, you’ll probably find the concept of the “Halt and Catch Fire Instruction” interesting! https://en.wikipedia.org/wiki/Halt_and_Catch_Fire

Mandelbrot sets were all the rage in the mid-to-late '80s. But business and enthusiast CPUs could still get by with passive cooling, and overheating was not a problem.

Now that you mention it, a memory leak would work, but that’s not really what you’re asking.

Is the story set in the 1980s, or is it just an old computer?

In 1975 Martin Gardner published his “hoax integer” e[sup]π√163[/sup], which is ridiculously close to, yet not equal to, the integer 640320[sup]3[/sup]+744. Perhaps your character tries to test whether the hoax integer is larger or smaller than that nearby integer, a task likely to overtax a computer, since its first 12 digits to the right of the decimal point are all 9s (.999999999999…).
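You can watch the near-miss happen with Python’s stdlib decimal module (a sketch: decimal has no built-in π, so this computes it via Machin’s formula; the 60 digits of working precision is an arbitrary choice):

```python
from decimal import Decimal, getcontext

getcontext().prec = 60  # arbitrary working precision, with guard digits to spare

def arctan_recip(x):
    """arctan(1/x) for integer x, via the alternating Taylor series."""
    x = Decimal(x)
    total = term = power = 1 / x
    n, sign = 1, 1
    eps = Decimal(10) ** (-getcontext().prec - 2)
    while abs(term) > eps:
        power /= x * x
        n += 2
        sign = -sign
        term = power / n
        total += sign * term
    return total

# Machin's formula: pi = 16*arctan(1/5) - 4*arctan(1/239)
pi = 16 * arctan_recip(5) - 4 * arctan_recip(239)

value = (pi * Decimal(163).sqrt()).exp()  # e**(pi*sqrt(163)), the hoax integer
nearest = Decimal(640320) ** 3 + 744      # 262537412640768744

print(value)            # ...768743.999999999999250...
print(nearest - value)  # about 7.5e-13: agonizingly close, but not an integer
```

Twelve 9s, then the mask slips; a program working at fixed low precision (or a human eyeballing a short printout) would conclude it’s an integer.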

(I have some boring stories of real incidents from a previous lifetime. For example, when early 370/165’s were retrofitted to support the MVCL instruction, some couldn’t handle the activity and their power supplies crowbarred off!)

It’s set in the present day, old computer.

What do you mean by “overtax”? Will the CPU just round the .999999’s up to one, or run out of memory before getting past the 12 digits, or something?

Real examples are great!

The trick is to break the fan. Even some modern chips have very bad problems if you take off the heatsink fan. I’ve seen videos from energy and CPU researchers testing various processors: AMD processors were notorious for not self-regulating temperature and literally catching fire. (Modern Intels are apparently very difficult to make this happen with, because they throttle their speed based on temperature, even shutting down when they get too hot.)

What you’re asking isn’t really possible in reality with the cooling intact. However, if the character were to, say, remove the fan or accidentally break it, any infinite loop should eventually cause overheating and, if you’re lucky, the computer will catch fire before it simply stops working and shuts down. There are probably ways to make this happen quicker, such as feeding it instructions that stress the processor so the pipeline is working at full capacity.

Processors have a set of low-level instructions, and different instructions naturally take more or less time to process. To keep speed up, the concept of “pipelining” was engineered. Different operations, such as arithmetic and memory access, happen in different “parts” of the processor, controlled in such a way that several things can happen simultaneously. I reckon that, given absolutely no cooling, an infinite loop that kept every stage of the pipeline busy at all times would be much more likely to result in catastrophic failure than a simple “add two digits” program.

Unfortunately, my lack of real practical experience in this field means I can’t suggest an algorithm that would do the trick. My memories of a stereotypical pipeline are incomplete at best, and pipelines are frequently complicated by memory instructions as it is (since, IIRC, on some processors a few memory instructions have to do arithmetic, then memory access, and then go backwards to the arithmetic stage again, which is why certain load instructions are so slow, since they also have to “reserve” the next computation to make room for the backwards step).

Would an infinitely recursive function cause a CPU fault by overflowing the stack?

A stack overflow is a software error. You’d just crash the program (or the OS as the case may be.)

It’s impossible to tell; it depends on how the language is implemented and optimized. At the assembly level a stack overflow is just a specific type of out-of-memory error.

Whenever you call a C function (assume a naive compiler, with no leaf-function optimizations or inlining), the program allocates enough stack memory to hold all the variables in the scope of the function. An infinitely recursive function in a language structured like this will almost certainly overflow the stack long before anything overheats.

Conventionally the stack grows “downwards” (toward lower addresses) and the heap grows “upwards,” and when the stack and the heap meet, if the program tries to expand further the kernel says “WOAH THERE BUDDY! That could break things pretty badly, I can’t let you do that” and kills the program. The only real difference between a generic out-of-memory error and a stack overflow is which region grew improperly into the other, but it’s mostly academic in the end.
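A quick way to see that a stack overflow is a software-level event, not a hardware one: in this Python sketch, the interpreter’s recursion limit plays the role of the kernel’s stack guard (the limit of 10,000 is an arbitrary choice), and the process survives the “overflow” just fine:

```python
import sys

def recurse(depth=0):
    # No base case: every call adds another frame to the stack.
    return recurse(depth + 1)

sys.setrecursionlimit(10_000)  # software guard, standing in for the kernel's

try:
    recurse()
except RecursionError:
    # The *program* gets an error; the CPU underneath is perfectly fine.
    print("stack overflow caught; process still running")
```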

That’s not what most of us are talking about, we’re talking about an infinite loop that doesn’t rely on expanding memory. The simplest version of this (albeit a version that’s so easy to execute it’s probably not likely to overheat anything) is:

`0x400000: j 0x400000` (aka `10 GOTO 10`, or whatever you want to call it).

This can literally execute until the silicon breaks or the power goes out, since it never relies on more memory being needed.

It should also be noted that “stack” and “heap” down at that level are pretty damn arbitrary and almost entirely convention (just like everything else; which registers are caller- and callee-saved? What does your ABI say!?). If you’re doing assembly-level programming you can allocate memory wherever and however you damn well please. Well, as long as it’s in the memory footprint the kernel allots you, at least. The kernel tends to get very cross and will kill your program if you try to operate outside your memory footprint (or even worse, try to write in the kernel’s reserved memory or register locations). I guess you can also make it throw a hissy fit (depending on the architecture) if the memory address you’re writing to isn’t sufficiently aligned with the size of the thing you’re writing (i.e. trying to write a .word after a .byte can get messy in MIPS, hence the .align directive).

But would an infinite loop be enough? Computers in the '80s still had regular IRQ events, so the loop would occupy the CPU, but the machine would not be completely frozen. I know from playing with a Commodore that the 6502 had unmapped opcodes (the so-called KIL/JAM opcodes, e.g. 0x02) that would irrecoverably freeze the CPU so that it would not even respond to NMI, but that was more like '70s-era hardware.

Were some of those PCs set up with I/O that would allow the CPU to shut the fan off? It seems like that, or misconfiguring some controller chip, might do the trick.

With the cooling working, it would likely be hopeless. Well, okay, I’m sure if you run a supercomputer with nothing but case fans outside in the Arizona desert at 2 PM in the middle of summer you can get some pretty disastrous results, but still, the point is that cooling is the important part.

Like I said, I don’t think a simple infinite loop like 10 GOTO 10 is going to do it. I’m pretty sure overheating a computer to the point of catching fire, even an uncooled one, is generally going to take some effort. I still think that if you engineered a program that literally flooded the pipeline and made sure that on every cycle each stage of the pipeline was doing something (which is way easier said than done; even at the assembly level the hardware will insert invisible no-ops like mad during execution to keep its pipeline consistent), you might have a shot.

Though I admit I’m assuming we’re not talking about a multitasking OS/processor; I’m assuming the computer is basically running the kernel and a single process, with no means to interrupt it short of killing the power or the program halting.

In a typical program to solve that “April Fool’s integer” question, a precision level will be set, the arithmetic will be done, the program will think “Oops, too close to call,” increase the precision level, repeat the entire calculation, find it’s still too close to call, and continue for quite a while. I don’t know exactly what you’re looking for; zeroing in on an interesting part of the Mandelbrot set would have a similar effect. Or you could just have the computer attempt to determine, by brute force, the best opening move in a chess game!
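That precision-escalation loop, applied to the hoax integer from earlier in the thread, might look like this in Python (a sketch using the stdlib decimal module; the starting precision of 25 digits, the ten guard digits, and the doubling step are all arbitrary choices):

```python
from decimal import Decimal, getcontext

def arctan_recip(x):
    """arctan(1/x) for integer x, via the alternating Taylor series."""
    x = Decimal(x)
    total = term = power = 1 / x
    n, sign = 1, 1
    eps = Decimal(10) ** (-getcontext().prec - 2)
    while abs(term) > eps:
        power /= x * x
        n += 2
        sign = -sign
        term = power / n
        total += sign * term
    return total

def hoax_gap(prec):
    """One pass at `prec` digits: returns (640320**3 + 744) - e**(pi*sqrt(163)),
    or None when the difference drowns in our own rounding error."""
    getcontext().prec = prec + 10   # guard digits for intermediate rounding
    pi = 16 * arctan_recip(5) - 4 * arctan_recip(239)  # Machin's formula
    value = (pi * Decimal(163).sqrt()).exp()
    diff = (Decimal(640320) ** 3 + 744) - value
    resolvable = Decimal(10) ** (19 - prec)  # ~19 digits sit left of the point
    return None if abs(diff) < resolvable else diff

prec = 25
while (diff := hoax_gap(prec)) is None:
    prec *= 2        # "Oops, too close to call": double the precision, redo it all
print(prec, diff)    # settles once precision can resolve a gap of ~7.5e-13
```

Each retry redoes the entire calculation at higher precision, which is exactly the sort of busywork described above; on an '80s machine doing floating point in software, each doubling would be painfully slow.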

For computers to overheat when they go compute-bound is not so unusual, I think. I used to explicitly lower the clock rate on my laptop when it went compute-bound; one groggy morning two months ago I forgot … and ended up needing to buy a new laptop!

I recall reading that the first computer to divide by zero blew some vacuum tubes(*)! (I can find no easy reference by Googling; perhaps it was just a made-up story.)

(* - I can’t think about computers with vacuum tubes anymore without recalling a certain very hot DEC repair-girl. Not knowing how to make conversation I gambitted “I used to repair computers.” Without missing a beat, and even though I was still in my early 40’s at the time, she responded “Oh. The ones with vacuum tubes?” :smack: )

It is true that it’s not that hard to make a computer hot, or freak out. Video games do it all the time (ask me about my overheating GPU problems!). The trick is doing it intentionally, with a simple function that can be described to a layman in a book, and more to the point, a simple function that would be normal in the '80s.

In the modern day, if you needed to overheat a computer to hilarious levels I’d probably just recommend fill-flooding the GPU with vertices and running some pointless vertex and fragment shader engineered to do the most ridiculous matrix math imaginable.

ETA: In short, overheating problems arise organically from complex workloads like “simulate the physics of the Big Bang.” That’s why there’s a market for making computing more energy- and heat-efficient; it’s obviously nowhere near impossible to make a computer hot. The trick is doing it in a minimal, simple, and elegant way. Which I’m sure is possible (CPU stress-test programs do exist, after all); I just lack the expertise to know exactly what they use to do it.

80’s era CPUs didn’t have fans.

Most didn’t even have heatsinks.

I sure don’t know anything about programming, but when I browsed through the Wiki article linked by Chipacabra (“Halt and Catch Fire”) I noticed a “See Also” link at the end of the article called Killer Poke, and I was curious about it…

Apparently, on the early Commodore PET, a single POKE to a video-controller address sped up the screen, but on later models the same POKE could physically damage the video circuitry: a one-line software command causing real hardware damage.

Maybe you can use something like this?