Need math problem to crash 80's CPUs

This is why many of my anecdotes would be useless here.

Many of my interesting stories from the 1970’s involve computers that not only had fans, but were water-cooled. :stuck_out_tongue:

Yeah, okay, I’m not old enough to have thorough experience. I mostly just assumed they did since the few vintage computers I’ve worked with were big supercomputing monstrosities that did have cooling.

ISTR a program for the Commodore 64 that would make the external floppy drives crawl ever so slowly in one direction, eventually falling off the table! Sounds a bit like what you are looking for?

There’s a popular program called Prime95 that searches for Mersenne primes. I’d let it run at least 48 hours to test a new PC. Every customer PC I built in the 90’s ran the test for 48 hours. It stresses the power supply, memory and CPU.

Most Gateways, Dells, and other commercially bought PCs survived 48 hours. A few failed and crashed after 96 hours. It was often heat related, or the memory had a bad chip.

The newest Prime95 is free and supports multiple cores. It basically runs the CPU at 100% non-stop.

You want to run the torture test after starting Prime95.

Sorry I can’t read all of the posts right now (need to sleep) but let me clear up a few misconceptions:

  1. The story takes place in the modern day, but uses an 80’s era CPU, such as a 68000 or 6502. These use passive cooling. Also, the CPU wouldn’t be sitting on a specific commercial motherboard; it’s more like a home-built computer hanging on a wall, whatever is barbaric enough to solve the problem I made this thread to look for. Very no-frills.

  2. I am not looking to physically damage the CPU; in fact, that won’t work at all. I want something plausible that causes it to run out of memory or lock up. Maybe the display resets power to the CPU every X minutes, after the CPU crashes within that time thinking about the problem.

  3. I’m no programming expert, but I have a C64 and can make an infinite loop in a single line of BASIC, and no, an infinite loop will not cause any damage or overheat the CPU. Maybe it is impossible to intentionally crash the CPU by giving it too difficult a problem, so some way to intentionally get a memory error might be the best solution to my problem.

  4. Illegal or undocumented instructions, while they might be able to hang the system, won’t work, because they don’t carry the metaphor I am trying to draw.

For example, the C64 has 38k of usable memory when it loads a BASIC program, so if it calculates digits of Pi to X decimal places it will predictably run out of memory at (some specified digit). But calculating pi is a bit of a cliched trope, so a somewhat more trippy infinite problem you guys could recommend might be on the right track. Mandelbrot sets sound more like what I am thinking of.
Thanks for the help so far; I had no idea this would be so difficult!

Hell, if all you want to do is make it crash and not explode, just do this:


unsigned int factorial(unsigned int n) {
    if (n == 0 || n == 1) {
        return n;
    } else {
        return n * factorial(n - 1);
    }
}

Such an old computer will probably crap out around a fairly small number; even modern computers will generally become sad somewhere in the hundreds range.
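If you want to see roughly where it gives up, a throwaway driver like this (my own sketch, not something I’ve run on period hardware) just keeps feeding it bigger values until the recursion blows the stack:

#include <stdio.h>

unsigned int factorial(unsigned int n);  /* the routine above */

int main(void) {
    /* the returned value overflows almost immediately; it's the
       recursion depth that eventually takes the machine down */
    for (unsigned int n = 1; ; n *= 2) {
        printf("trying %u!\n", n);
        factorial(n);
    }
}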

Forgot to mention, you’d need a very early 1990’s version of Prime 95. Something I have somewhere on floppy disks.

Never had it damage a PC. At most they overheat and crash. They are fine after cooling off.

Er… that first return statement should be “return 1”. I can’t believe I just messed up a factorial program.

ETA: On the supercomputer server at my university, it craps out (meaning actually segfaults, not “fails to get the correct value”) at about 262027!, I imagine a hacked together home computer with '80s tech is going to be significantly lower.

So an Apple for instance. :smiley:

Back then, on personal-level machines, and especially with those two CPU architectures, there was no such thing as protected memory. A program that does nothing more than overwrite a slab of the OS’s memory will kill it pretty quickly. Overwrite the interrupt vector table and it will die almost instantly.
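To put some flesh on that, here’s a hedged sketch in C of the “kill it almost instantly” version; imagine it cross-compiled with something like cc65 for a C64, and treat the addresses as illustrative rather than a tested program:

#include <stdint.h>

int main(void) {
    /* On the C64, the KERNAL jumps through a RAM vector at $0314/$0315
       on every timer interrupt. With no memory protection, nothing
       stops a program from scribbling over it. */
    volatile uint8_t *irq_vector = (volatile uint8_t *)0x0314;
    irq_vector[0] = 0x00;   /* low byte of the new (bogus) handler address */
    irq_vector[1] = 0x00;   /* high byte */
    for (;;) { }            /* the next interrupt jumps into garbage and
                               the machine locks up */
}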

By 80’s era computers I’m assuming you are talking about one of the 8 bit microcomputers. At the beginning of the 80’s these were very small and simple machines like the Commodore PET and the Apple II. This was the era of the “war of the 8 bit machines”: you had a lot of different machines all fighting for market share. Atari had the Atari 400 and Atari 800. Texas Instruments had the TI-99/4A. Commodore started with the PET and really took off with the VIC-20 and the later Commodore 64.

You also had the early IBM PCs (which were geared for business, but were used in the home, with HORRIBLE graphics and sound) and, from the mid-80s, the first Macs.

All of the 8 bit computers were somewhat similar in design. They had a very basic CPU like the Z80 or 6502 and not a whole lot of memory. As was already mentioned, the CPU did not have a fan and usually didn’t have a heatsink. Even if you put the CPU into a tight loop and made it work as hard as it could, you could leave your finger on the chip and not get burned. CPUs back then weren’t multitasking and didn’t halt when not executing a thread like a modern CPU, so they pretty much ran at full bore all the time. They weren’t “stressed” by complex programming any more than they were stressed when they appeared to be sitting there doing nothing. Either way, they were constantly churning through instructions, even if the instructions were just a loop to wait for input from the keyboard.

They were so slow and had so little memory that you had to be very careful doing anything complex with them. Programming anything complex usually meant doing it in assembly language just because the higher level languages for those computers were very limited and slow. This meant that programming the computer was a very slow and time-consuming process compared to modern programming. You could probably whip up a simple flash game these days in an afternoon. The equivalent game on a computer of that era would have taken about a month to program. You didn’t have standard libraries and you didn’t even really have much of an operating system to interface to. You had to program every little bit yourself, including the low level video access routines.

Solving some high level math problem would be very, very, very slow. The 8 bit CPUs were very brain dead. You can’t just add the numbers 2,153 and 7,212 together in a single instruction. They have to be broken up into two bytes each, then each byte added, and the program has to handle the carry and all of that. And forget about multiplying and dividing. You want to do that, you have to break it down into loops of adds and shifts or subtracts and shifts (look up Booth’s algorithm if you want the details). You want to just multiply two 16 bit numbers together? Oh yeah, that’s going to take a while. All math was integer math, too. You want floating point, you have to emulate it. Good luck. You want to crunch some numbers? Ok, fine. Come back in a few hours. It might be done.

As was already mentioned, you can’t “overload” the machine. If you try to do too much math, it just takes a really long time, possibly too long to be practical (as in days, seriously). There’s no memory management. If your program goes into a loop and runs out of memory, chances are you’ll just end up doing something weird, and after running for several hours your computer will just lock up with no error code and no way for you to really know that it was your bug that ran it out of memory and caused it to lock up. Debugging things like that back then was sometimes painful.
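To give a feel for the multiplication point above, here’s a rough sketch in C of the shift-and-add loop (illustrative only; on a real 6502 every line of that inner loop is itself several byte-wide adds, shifts and carry fix-ups):

#include <stdint.h>
#include <stdio.h>

/* classic shift-and-add: one pass per bit of the multiplier,
   16 shifts and up to 16 adds for a 16-bit multiply */
uint32_t mul16_shift_add(uint16_t a, uint16_t b) {
    uint32_t product = 0;
    uint32_t addend = a;
    for (int bit = 0; bit < 16; bit++) {
        if (b & 1)
            product += addend;  /* several ADCs with carry handling on an 8-bit CPU */
        addend <<= 1;           /* shift the multiplicand left */
        b >>= 1;                /* move on to the next multiplier bit */
    }
    return product;
}

int main(void) {
    printf("%lu\n", (unsigned long)mul16_shift_add(2153, 7212)); /* 15527436 */
    return 0;
}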

Computers then didn’t really crash in the modern sense either. They didn’t have enough of an operating system to do a blue screen of death. They were single-tasking; you couldn’t have a program crash without taking the entire OS down with it (there wasn’t really much of an OS there to start with, just a really basic command interpreter). A crash on an 8 bit computer usually meant the thing would just lock up on you. No error codes. No nothing. It just stops. At that point there’s nothing you can do but hit the reset button and start over.

At the end of the 80’s you started to get “real” operating systems with some basic resource management and the ability to multitask (as long as programs behaved themselves). Then you started to get modern style crashes, though even then a lot of times a single program crash would take the entire OS with it.

If you have access to an 80s style mainframe, things can be much more interesting. An 80s style VAX running VMS can multitask and can have things like a program attempting to restart if it fails. You still can’t overload it until it breaks, though. The worst you can do is completely fill the disk and have it halt.

Can you give any more specifics about how this fits into the plot of your story and what it needs to accomplish?

The old 8088 and 8086 (original PC) were easy to crash. I wrote assembly as a hobby and sometimes I’d write over the wrong memory area. Instant crash. These CPUs were totally unprotected from bad programming.

IIRC the 286 introduced some protection. You could still (almost) do what you wanted in assembly. Write all over DOS’s memory space and it crashed. I recall screwing up and writing to video RAM; it left some really messed up, blinking screens. All part of my education. :wink:
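Something like this is all it took (a from-memory sketch; it wants a 16-bit real-mode compiler such as Turbo C, and on anything with memory protection it just faults instead):

#include <dos.h>

int main(void) {
    /* Colour text-mode video memory sits at segment B800 under DOS.
       Writing character/attribute byte pairs straight into it fills
       the 80x25 screen with blinking junk. */
    unsigned char far *video = (unsigned char far *)MK_FP(0xB800, 0x0000);
    unsigned int i;
    for (i = 0; i < 80 * 25 * 2; i += 2) {
        video[i]     = 'X';    /* character cell */
        video[i + 1] = 0x8F;   /* attribute: blink bit + bright white on black */
    }
    return 0;
}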

The 386 had even more CPU protection. I stopped writing assembly for fun; the 386 made it too hard.

I have to ask: I assume that students doing programming back in the day would eventually fuck up the kernel memory by accident (what with there being no memory protection and all). How did computer departments deal with this? Was it relatively easy to just feed in some punch cards or read a tape and have the system back up and running in a few minutes, or did it require a grad student sitting there flipping switches for an hour to reconfigure the OS every time a student accidentally ran headfirst into kernel memory?

Just read your last post (missed it the first time when I skimmed down the thread, sorry).

How about some sort of data encryption / decryption thing? You could run a Commodore 64 out of memory pretty quickly doing that.

It’s been a LONG time since I programmed a Commodore. I think if you run it out of memory in BASIC it will give you an OUT OF MEMORY error. For anything complex, though, I used to switch right to assembly code (it was faster and easier, IMHO), and there’s no real system resource management in assembly. If you ran out of memory your program would usually just hang, or you’d start overwriting things and doing weird stuff, depending on how you wrote your code (pointer wrap-around, for example).

Mainframe operating systems did a good job of protecting themselves. At most, your timesharing account froze. Shut off the terminal. Ask someone with privs to kill the process as a last resort.

PCs running DOS only needed a hard reset by pulling the plug. At most, you might need a fresh boot floppy if the old one got corrupted. Early PCs usually didn’t have hard drives.

As engineer comp geek said, the simplicity of those CPUs is a difficulty.

Modern CPUs and GPUs can do so many things at once that it’s often impossible to prove that there isn’t some task which might overtax the cooling or power solutions. In my industry, we call programs that do this “power viruses”, even though they usually aren’t intentional. Generally these involve achieving perfect utilization of each of the subsystems: math, memory, cache, etc. There’s no reason a power virus even has to do anything interesting; indeed, that would impose an artificial and unneeded restriction if you were trying to design such a “virus”.
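As a toy illustration only (nothing like a real, tuned power virus), the shape of the thing is just a loop that keeps the math units and the memory system busy at the same time:

#include <stdio.h>
#include <stdlib.h>

#define BUF_WORDS (1u << 22)   /* big enough to spill well out of cache */

int main(void) {
    double *buf = malloc(BUF_WORDS * sizeof *buf);
    if (buf == NULL)
        return 1;
    for (size_t i = 0; i < BUF_WORDS; i++)
        buf[i] = (double)i;

    /* run forever: floating-point work plus scattered memory traffic,
       so neither the ALUs nor the memory bus get a rest */
    double acc = 1.0;
    for (;;) {
        for (size_t i = 0; i < BUF_WORDS; i++) {
            acc = acc * 1.0000001 + buf[i];        /* arithmetic work */
            buf[(i * 64u) % BUF_WORDS] += acc;     /* strided stores */
        }
    }
}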

So, to reiterate what ECG said, the problem is easier if you have access to more sophisticated machines like mainframes. These machines might be more susceptible to power-virus-like techniques.

… was a minicomputer, not a mainframe. A PDP-10 was a mainframe. A System/370 was a mainframe. Getting the little things right adds realism to a story.

As was mentioned above, you might have more luck thinking of interesting things to do with peripheral hardware. It was indeed possible to get some kinds of hard drive to walk across the room, much like an unbalanced washing machine might walk, if you had the right (or wrong) access patterns fed to it by software. The most dramatic versions of this are limited to old (1960s-era, 1970s-era) mainframes with disk drives the size of washing machines, which is a bit far afield from what you were asking about.

In the early 80s I moved from a 1k ZX81 to a UK101 single-board computer with 4k of RAM and Microsoft BASIC, and an extension ROM including an assembler. One of my favourite BASIC programs did arbitrary-precision integer math. I would use this to find the largest number I could generate a factorial for (x!). It would also show each step along the way (1!, 2!, 3! … x!). It could take hours to finish, and often never did.
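For flavour, here’s roughly what that program was doing, sketched in C rather than the original BASIC (the fixed digit buffer stands in for the machine’s 4k of RAM, so the exact cut-off is only illustrative):

#include <stdio.h>

#define MAX_DIGITS 4000   /* stand-in for the little machine's RAM limit */

int main(void) {
    /* arbitrary-precision factorial: one decimal digit per byte,
       least significant digit first */
    static unsigned char digits[MAX_DIGITS] = {1};
    int len = 1;

    for (unsigned int n = 2; ; n++) {
        unsigned int carry = 0;
        for (int i = 0; i < len; i++) {
            unsigned int v = digits[i] * n + carry;
            digits[i] = v % 10;
            carry = v / 10;
        }
        while (carry > 0) {
            if (len == MAX_DIGITS) {
                printf("out of room at %u!\n", n);
                return 0;
            }
            digits[len++] = carry % 10;
            carry /= 10;
        }
        printf("%u! has %d digits\n", n, len);
    }
}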

The UK101 also had a warm-boot (reboot the CPU without resetting memory) - this could easily allow a crash-restart cycle that reran the same application (by virtue of changing the CPU start address in memory).

In 1985 I started a Computer Science degree. One of my courses used Z8000 VME boards hooked up to a host (VMS, probably). We had to write a binary multiply routine in assembler - rotate and add. The trick with this is knowing when to stop (i.e. after 16 or 32 bits). Many of my classmates failed at this, and because the VME lab was locked (no access to the reset buttons), we were rapidly running out of VME systems that were not stuck in an endless loop :smack:
Fortunately, I crashed into a locked-up system via the terminal port, validated and tested my code first time, and sold my access session off to the highest bidder :smiley:

On a slightly later note (1988/89 or so), my father-in-law-to-be was a lecturer in Computer Science when I was at university (and dating his daughter). He was developing the standard library for Modula-2, and decided to play with some recursive routines to test the compiler. He wrote a routine that never terminated (bad news in general), expecting a stack fault once it reached 64k (80286 segmented memory model). In fact, it never faulted, and a bit of debugging showed that the stack pointer just wrapped within the 64k segment. We played with alignment and local variable allocations (and debugging statements) to get some weird effects from data offsets as the stack wrapped. That is where I learned about stack/buffer overflow hacking. Good times.

Si

Back then teaching wasn’t done on PCs. From about 1980 onwards the VAXen had a mortgage on the market for teaching. These beasts ran a proper OS and provided proper protected (and demand-paged) virtual memory. You could run a class of a couple of dozen students on serial terminals off a single VAX 11/780 (which was the first model made). Considering how staggeringly small and slow these were in comparison to modern machines (1 MIPS, 4 MB of memory, about 100 MB disks), one really has to wonder what the heck went wrong.

Berkeley Unix was ported to the VAX soon after it came out, and many CS departments taught their courses with Unix, whilst some stayed with VMS on the machines. There are remnants of the VAX architecture that still linger in modern operating systems, via both the Unix and VMS strands.

Prior to the VAX, some wealthy schools had machines like the DEC-10, and many ran their undergraduate courses on punch cards. Machines like the CDC Cyber series were popular. Pascal was first implemented on one, and that became part of its drive to popularity as a programming language.

Well, I knew they weren’t on PCs. Hell, even at my university nowadays everything is just a login terminal for an Ubuntu server. I know I’ve heard stories from professors about waiting in line at 2 AM to test your program on the ONE COMPUTER, and then getting rather stabby because the guy ahead of you had an infinite loop and somebody had to go and physically reset the machine. I didn’t know that they had memory protection on the school mainframes, though; good to know.

They probably didn’t need to physically reset it, at least not by the 70’s. However, a primitive job scheduler might assume that all jobs ran to completion before the next one ran.

There were no OSes that really were all that friendly towards teaching until VMS and Unix. (Maybe TOPS-20; I never used that.) Many came from commercial batch-processing backgrounds. The concept of interactive use was pretty novel in the mainstream. Job control on the CDC machines included some level of operator intervention if things went awry. Jobs took long enough that you could visually see them propagate up the queue. The operator’s console would often be configured with the job queue permanently displayed. The students a few years ahead of me had their own group of hackers who would engage in (mostly good-natured) running battles with the operators.

Indeed, these batch-processing systems really didn’t have what we would recognise as a modern operating system at all. Some interactive features were grafted onto the basics and never worked especially well. Such hacks as running each command line as a separate batch job were used. It’s interesting to contrast this with the goings-on at places like MIT and Xerox PARC. Xerox had what was a fully functioning personal graphics workstation by the early 70’s.