As a kid, I played a spaceship game called “Wing Commander”. A few years later, we got a new computer. But the new computer was so fast that the game was now unplayable – although there were no noticeable bugs per se, the other spaceships now moved so fast I could barely react to them anymore. This was a case where faster computers were actually a bad thing.
But this problem doesn’t seem to happen anymore with games in the modern age. Why not? The latest iPad supposedly has a processor that’s 4x faster than the original iPad (plus the GPU is 10x faster or something), but when I play “Plants vs. Zombies” on my new iPad, the game loads a lot faster and with less lag, yet during gameplay the zombies don’t run down the field any faster than they did on my original iPad. How do modern game makers keep that consistent? Are the games coded to not do more than X actions per second, regardless of CPU speed? But I can’t imagine how code like that would look. Or is there some kind of emulator mode that deliberately slows down the computer when running old games?
Feel free to use technical jargon to reply – although I’ve never written computer games, I do have an advanced degree in computer science, so no need to dumb down the answer.
Quick answer: modern games actually use timing mechanisms to know how fast to go, rather than just using the speed of the processor. This didn’t happen in older games because adding a timing mechanism itself took a significant amount of processing. Here’s some pseudocode:
Old:
animate one frame
animate next frame
New:
animate one frame
check the clock and wait until enough time has passed
animate next frame
Old-school programs may have used the equivalent of cycle-counted loops to control timing - a for loop with a certain iteration count was ‘known’ to take a certain amount of time, for instance. Also, it took a noticeable amount of time just to draw the graphics - there may not have been as much time to ‘waste’ as you’d think, way back when.
On newer machines, the graphics are offloaded (and therefore very fast, relative to wall clock time), and the loops run faster than they used to. If you relied on those mechanisms, you’d have issues.
Instead, we use functions that let us sleep for x milliseconds. Those functions work regardless of the processor speed.
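For concreteness, here’s roughly what that newer approach looks like in real code. This is just a sketch in C++ using std::chrono and std::this_thread; update_game() and draw_frame() are hypothetical stand-ins for whatever the game actually does each frame:

#include <chrono>
#include <thread>

void update_game() { /* advance spaceships, zombies, etc. by one step */ }
void draw_frame()  { /* push the current state to the screen */ }

int main() {
    using clock = std::chrono::steady_clock;
    const auto frame_budget = std::chrono::milliseconds(16);  // roughly 60 fps
    while (true) {
        const auto frame_start = clock::now();
        update_game();
        draw_frame();
        const auto elapsed = clock::now() - frame_start;
        if (elapsed < frame_budget) {
            // Fast machine: sleep off the leftover time so the game
            // runs at the same speed everywhere.
            std::this_thread::sleep_for(frame_budget - elapsed);
        }
        // Slow machine: nothing left to sleep, so the game just runs
        // at whatever frame rate it can manage.
    }
}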
-D/a
That’s the simplest sort of program, of course. More complicated ones actually increase the frame rate if there’s spare time. A higher frame rate means there’s less change between successive frames, so motion looks smoother.
If you have a slow enough computer, you can actually see these methods breaking down. Depending on how the game is coded, this can happen anywhere from as high as 30 fps down to as low as 5 fps. I personally find that Costume Quest, which barely runs on my old graphics card, starts showing lag at 20 fps; input response becomes noticeably slower.
The simpler timing method I mentioned breaks down when drawing frame 1 takes so long that frame 2’s time slot has already started.
I think that in the old days they didn’t need to worry about games running too fast because they weren’t dealing with a big range of processor speeds. They were basically writing for one particular model of computer, with only one or two CPU variants. There was no particular expectation that future home computers would be able to run the same native code, like we take for granted now.
That said, I wrote a game myself years and years ago in which I used a timing mechanism to deliberately delay response. The reason was that it was a puzzle game, and certain moves were much easier for the computer to calculate. If the game responded too quickly, the player would know that certain hidden squares were empty. So I put a “minimum thinking time” mechanism into the code.
Not just games, either.
I play with Motorola two-way radios for fun. Early (1980s) computer-programmable radios were configured with software that can’t run properly on newer, faster PCs. And “newer, faster” here means newer than a 386 and faster than about 25 MHz. So there are thousands of Motorola radio hobbyists like me hanging onto these old laptops to program our radios.
It is further discussed here: Batlabs
If that were the case, games would experience slowdown on computers that are too slow. In reality, a modern game’s logic runs at the same speed on a slow computer; it’s only the rendering that gets stuttery.
Generally this is handled with what’s called “time-based processing,” which is a fancy way of saying you multiply any change in anything (position, animation time, buff timer, etc.) by the amount of time the last frame took to process.
A simple example would be walk speed in a game. Old method:
Each frame: move character by moveSpeed;
with moveSpeed being in units (pixels, meters, whatever) per frame.
Time-based processing:
Each frame: move character by moveSpeed * timeElapsed;
with moveSpeed being in units per second and timeElapsed being the duration of the last frame, in seconds.
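In code, that might look something like the sketch below (C++; the Character struct, its fields, and the speed value are just illustrative):

#include <chrono>

struct Character {
    double x = 0.0;          // position, in meters
    double moveSpeed = 3.0;  // speed, in meters per SECOND (not per frame)
};

// Called once per frame with the duration of the previous frame, in seconds.
void update(Character& c, double timeElapsed) {
    c.x += c.moveSpeed * timeElapsed;  // same distance per second on any machine
}

int main() {
    using clock = std::chrono::steady_clock;
    Character player;
    auto last = clock::now();
    while (true) {
        const auto now = clock::now();
        const double timeElapsed = std::chrono::duration<double>(now - last).count();
        last = now;
        update(player, timeElapsed);
        // draw, handle input, etc.
    }
}

At 30 fps timeElapsed is about 0.033 s and at 120 fps it’s about 0.008 s, so the character covers the same ground per second either way.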
If memory serves, the Safecracker puzzle game from the late '90s had a clock-based puzzle that became unplayable on faster computers, and the company had to issue a patch.
I remember that one of our old computers had a “Turbo” button on the front. If you turned this off, the computer slowed down enough to make older games playable.
Yeah, that was introduced with the first IBM 8088 clones. The clone makers figured out an easy way to double the original Intel CPU’s clock speed, but for backwards compatibility with some programs they let you turn it on & off.
In the early days of home computers, just as today, games were the most cutting-edge software around. But back then game writers had nothing but limitations: limited processor speed, memory, graphics, disk space, etc., so they learned to make the most of what they had. Thing is, as computer resources became more & more bountiful (GHz CPU speeds, powerful separate graphics cards, GB of RAM, TB of disk space, etc.), game makers always made sure to use absolutely everything they had available to them (it’s called “software bloat”). In fact, whereas PC games used to be written by a handful of people (major releases were sometimes essentially written by one person!), nowadays they’re made by huge teams of hundreds of artists & programmers and take years to finish. And since PC power doubled every year to year & a half, they would put what seemed (at the time, anyway) like too much into a game, knowing that by the time the game was finished the hardware would have caught up.
This software/hardware lag cycle has actually caused some very infamous game debacles like John Romero’s Daikatana and Duke Nukem Forever…
Software bloat isn’t limited to games, either. Windows 95 was actually a pretty decent operating system for its time, and in theory it would run on 4 MB of RAM.
In the olden days, you could only run one program at a time, and that program took up the entirety of the CPU. That meant that if you had a sequence that took 3000 cycles, it would take exactly 1 ms every time on a 3 MHz CPU, and you could rely on that. Because game consoles came with a known hardware configuration, games were coded exactly to the specs of the hardware.
Nowadays, multiple programs run simultaneously and on vastly different hardware platforms so programs are coded to be independent of the hardware.
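A caricature of that old style, sketched in C++ (the loop count is invented; in real code it would have been tuned by hand for one specific CPU):

// Old-style "timing": burn a known number of cycles instead of asking a clock.
void delay_one_frame() {
    // The loop count would have been tuned so that, on the one machine the
    // game was written for, this took roughly one frame's worth of time.
    // On a CPU twice as fast it finishes in half the time, and the whole
    // game runs twice as fast.
    for (volatile long i = 0; i < 100000; ++i) {
        // busy-wait: do nothing
    }
}

int main() {
    while (true) {
        // update_and_draw_one_frame();  // hypothetical per-frame work
        delay_one_frame();               // pacing comes from cycle counting alone
    }
}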
Getting back to the OP’s question, the one key thing to keep in mind is that on old PCs, the CPU had to do almost everything. If you had a “graphics card,” that card was just a dumb framebuffer (an array of RAM) and relied on the CPU to do all of the drawing. The CPU would then send the graphics data to the framebuffer, which would spit it out to the display. So yeah, a lot of games just assumed they were running on a 4.77 MHz 8086/8088 instead of wasting resources on timing.
IIRC, some games on the first XBOX did this as well, and would run too fast on an overclocked console.
Getting onto the side topic, I don’t really get the complaints about software “bloat.” My system has millions of times the power of the one I had 20 years ago. That power is there to be used.
Yup, though if you tie your update cycle to your render cycle (as many games do, because it’s difficult to deal with the necessarily shared data between the two), there may be a “check if enough time has passed” step when things like vsync are enabled.
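As a sketch of what that coupled loop looks like (C++; swap_buffers_with_vsync() is a made-up placeholder for whatever the graphics API provides, e.g. a buffer swap with vsync enabled that blocks until the next display refresh):

#include <chrono>

// Hypothetical placeholders: per-frame game work plus a vsync-enabled
// buffer swap that blocks until the next display refresh.
void update_game(double dt) { /* advance the simulation by dt seconds */ }
void draw_frame() { /* render the current state */ }
void swap_buffers_with_vsync() { /* real code would call the graphics API here */ }

int main() {
    using clock = std::chrono::steady_clock;
    auto last = clock::now();
    while (true) {
        const auto now = clock::now();
        const double dt = std::chrono::duration<double>(now - last).count();
        last = now;
        update_game(dt);            // update is tied directly to the render loop
        draw_frame();
        swap_buffers_with_vsync();  // with vsync on, this blocking wait acts as
                                    // the "check if enough time has passed" step
    }
}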
Yes, precisely, but you can’t use it, since it’s all bloated away. Anything that you used to do in some amount of time, you still do in about the same amount of time, no faster.
This is absolutely, positively, incredibly false. Boot times on modern hardware are minuscule; I have memories of Windows 95 systems taking two or three minutes (at least) to boot. Modern computers, on the other hand, boot in about ten seconds, if that. Modern graphics/floating point processing is blazingly fast and integer processing, while slowing down a bit from the Moore’s Law acceleration of times past, is still better than ever. Disk access is no longer an issue since the ascendancy of SSDs. Browsers are more efficient and things like 1080p video are decoded and displayed in real time, which was simply not possible until a few years ago. And we have these incredibly fast computers that do these things that fit in your pockets now! Your statement that “software bloat” (a very ill-defined concept; as Derleth stated, you may not use Whizzy Feature A, but you probably use Whizzy Features B and C and would be quite displeased if they ceased to exist) has negated all hardware gains is completely absurd, and I’d like to ask you to provide real-world examples of this, because it is precisely contrary to my experiences.
I will say, though, that the one thing that hasn’t really kept up is network down/up speed, thanks to American ISPs lagging behind due to either incompetence or an endemic form of malicious laziness. Google is trying to change that in Kansas City at the moment and is, at the very least, demonstrating how dreadfully behind the times ISPs are.