I recently came across the game .kkrieger, which is a fully 3D first-person shooter stored in a single 96kB executable. Now, it’s not the longest of games, just one rather short level, but it’s got rather impressive graphics (go here for some screenshots – each screenshot is nearly as big as the game itself, by the way), various enemies and guns (I think it’s five of each), several different rooms, plus all the necessary object behaviour, rudimentary AI, and so on.
I know about procedural generation, or at least have a vague notion of it – objects, textures and stuff are created algorithmically, rather than stored and fetched when necessary (right?).
But I still can’t wrap my head around how small this thing is. I mean, even with extensive use of procedural generation, it seems to me like just the code for the enemies’ behaviour, or the weapons, the dynamic lighting, all that stuff, should be more than 96k, and then you’ve still got to store the algorithms to generate the environments…
So, can anyone give me a short rundown on how it’s possible to squash that much content into (or rather, generate out of) 96kB (I just have to say this again)?
And why isn’t this used more widely? I know the upcoming Spore relies heavily on procedural generation for its content, but other than that, I can’t think of many examples that use it to any large extent (well, Diablo’s and similar games’ levels are procedurally generated, I believe, but the objects and textures and stuff are still all stored)… Would it just be too heavy a burden on the computer to have it essentially create all the surroundings from scratch, or too difficult to implement on the programming side, making simple storage the better and more economical option? Wouldn’t it be possible to have a texture/object/whatever created by an artist sort of reverse-engineered into an algorithm that produces a reasonably similar thing? (I know something like that is done in fractal compression of images, though I haven’t really heard much about that recently, either.)
From the readme file included.
Basically they’re saying what I was expecting when I downloaded it and what you had said in the beginning. All of the objects are defined with basic how-to-draw lists in the code, i.e. panel with texture 100, color ffeecc, reflectivity .86, etc. Then map this panel onto a defined framework.
And according to the readme, all of that is then packed up with highly optimized compression.
That progress bar when you first load up is where it’s building and stashing everything prior to playing. There are no digitized graphics or sound; all the legwork is handled by DirectX and the GPU on the graphics card. They have a rendering engine, kind of like the POV-Ray application, built in to “draw” everything from scratch every time you play. Commercial games don’t do this because of the time it would take to generate everything up front – several hours of load time would not be a good selling point.
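To make the “how-to-draw list” idea concrete, here’s a rough C++ sketch of my own (the struct and field names are invented for illustration, not .kkrieger’s actual data format) showing how a panel like the one above can be described in a few dozen bytes instead of being stored as a bitmap:

```cpp
#include <cstdint>
#include <cstdio>

// Hypothetical "how-to-draw" record: a quad placed in the world,
// referencing a procedurally generated texture by ID.
struct Vec3 { float x, y, z; };

struct PanelDesc {
    Vec3     corners[4];    // geometry: four corner coordinates
    uint16_t textureId;     // e.g. texture 100, generated at load time
    uint32_t color;         // e.g. 0xffeecc
    float    reflectivity;  // e.g. 0.86
};

int main() {
    PanelDesc panel = {
        {{0, 0, 0}, {1, 0, 0}, {1, 1, 0}, {0, 1, 0}},
        100, 0xffeecc, 0.86f
    };
    // The whole description is a few dozen bytes, versus kilobytes
    // (or megabytes) for a stored bitmap of the same panel.
    std::printf("panel description: %zu bytes\n", sizeof panel);
    (void)panel;
    return 0;
}
```

Running it reports roughly 60 bytes for the whole description; a stored bitmap of the same panel would be orders of magnitude larger.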
All in all, some pretty impressive work.
All in all, some truly amazing work.
Too late for an ETA. Sorry for repeating myself. Too many people talking to me at the same time.
Ah yes, I didn’t think about that; load times are bad enough as it is… Thanks for the answer. So there’s really no magic involved, huh?
If someone had shown me that game and proclaimed it was only 96kB in size, I would have thought them full of shit.
Related question: I suppose it’s not feasible to, say, store an image in algorithm form, i.e. have some program compute an algorithm which, when executed, will reproduce the image?
It’s always possible. An image is just a string of ones and zeroes, and any such string x can be output by a computer program whose source is “print x”. I assume you’re really asking whether an image can always be output by a program which is significantly shorter than the image itself. For any real image, the answer is yes – just save it as a .jpeg with low quality.
At the risk of showing my ignorance, is that really the same thing? I’d imagine jpeg compression to essentially work by applying an algorithm to the data that uses the way humans perceive images to ‘toss away’ detail that is usually not perceived – perhaps reducing chromatic resolution, getting rid of colour differences that you wouldn’t notice anyway.
What I imagine in my naivety is something more like analysing an image and, based on that, creating a set of drawing instructions that approximate the original image – or is that actually what’s done in jpegs?
No, you are correct about the basic operation of a jpg. The level of detail can be adjusted down to create a smaller data file, but once the file is saved at that level, the data that was removed is completely gone. If you have an application that lets you restore the image, it’s really just using algorithms to calculate and make a best guess to fill the details back in.
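To illustrate just that point, here’s a toy C++ sketch of the quantization step that makes JPEG lossy (it’s not a real codec, and the coefficient values and step size are made up): the encoder divides and rounds, and the decoder can only multiply back, so the discarded precision never returns.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Pretend these are a few frequency (DCT) coefficients from an 8x8 image block.
    const double coeffs[] = { 231.4, -58.2, 12.7, 3.9, -1.2 };
    const double quantStep = 16.0;  // bigger step = lower quality, smaller file

    for (double c : coeffs) {
        // Encoder: divide by the step and round -- this is where detail is thrown away.
        long stored = std::lround(c / quantStep);
        // Decoder: the best it can do is multiply back; the original value is gone.
        double restored = static_cast<double>(stored) * quantStep;
        std::printf("original %8.2f -> stored %3ld -> restored %8.2f\n",
                    c, stored, restored);
    }
    return 0;
}
```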
I’ve got a long-held theory for data compression…
In theory, pi and fractal sets and a few other “magic” numbers are infinite and non-repeating sequences. So by extension, every possible combination of numbers exists somewhere in those sequences.
So, with a powerful enough computer, you could locate the contents of any file somewhere in that infinite sequence and just store the offset as the “compressed” file. To unpack you just jump(calculate) back to that start point and there’s your data.
Extending that thought…
Every application, every picture, every mp3 that has ever been created or will be created already exists in that infinite sequence and is just waiting to be discovered.
Is it too late to copyright pi?
pi is believed to be normal, but not known to be so. Even if it is, it’s not a cure-all for compression: the index of the first occurrence of a given number in the base b expansion of pi may be much larger than the number itself.
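A quick experiment shows why. Here’s a C++ sketch of my own that uses a pseudo-random byte stream as a stand-in for the digits of pi (searching the real digits isn’t practical here) and “compresses” a 16-bit pattern by storing the offset of its first occurrence; the offset itself typically needs about 16 bits to write down, so on average nothing is saved:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    // Stand-in for "the digits of pi": a long, effectively random byte sequence.
    std::mt19937 rng(12345);
    std::vector<unsigned char> stream(1 << 20);  // 1 MiB of pseudo-random bytes
    for (auto &b : stream) b = static_cast<unsigned char>(rng());

    // The "file" we want to compress: 2 bytes (16 bits) of data.
    unsigned char data[2] = { static_cast<unsigned char>(rng()),
                              static_cast<unsigned char>(rng()) };

    // "Compress" by storing the offset of the first occurrence in the stream.
    auto it = std::search(stream.begin(), stream.end(), data, data + 2);
    if (it == stream.end()) { std::puts("pattern not found in this stream"); return 0; }
    unsigned long offset = static_cast<unsigned long>(it - stream.begin());

    // The offset typically needs about as many bits as the data itself,
    // so on average nothing is saved.
    std::printf("data: 16 bits, offset: %lu (~%.0f bits to store)\n",
                offset, std::ceil(std::log2(offset + 1.0)));
    return 0;
}
```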
Fractals are used for compression already, so you’re a bit behind the curve on that one.
Rather than hijack this thread, I’ve started a new one over in IMHO…
farbrausch are masters of the 64k demo competition. They first burst on the scene with the incredibly detailed and rather long 3D demo called .theprodukkt (demo #fr-008, their 8th).
They are able to pack so much into a tiny package for several reasons:
- 3D vector graphics. Objects in the demos (and all realtime 3D software) are simply described by a series of coordinates – x, y and z. That takes up very little room. To give you an oversimplified example, let’s say I want to display a square. I can either save the entire graphic of a square, which even with JPEG compression could take up 2-3k, or I can tell the computer how to draw the square by giving it the coordinates of each of the four corners and then tell it to flood fill inside the square. Obviously, the latter method is going to take up only a few dozen or so bytes versus thousands of bytes for a full stored graphic.
- Procedurally generated graphics and textures. These are backgrounds and texture maps to place on the 3D objects that are generated mathematically. They are not stored as graphics, but rather are generated by the program when it is run. (A toy version of the idea is sketched after this list.)
- Procedurally generated audio. Farbrausch member kb created his own software synthesizer that can generate sound effects from mathematical parameters. This is nothing new, actually; synthesizers have been doing this forever. But creating their own software-based synthesizer meant they could simply keep a musical score stored in the software and generate the instrumentation mathematically at run time. (Again, a rough sketch follows the list.)
- Runtime compression. Classically called “packers” from the days of yore, these are executable files that are themselves compressed and simply decompress automatically in memory when they are executed. (A crude illustration of this follows as well.)
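To give a flavour of the procedural-texture item above, here’s a toy C++ generator of my own (nothing like farbrausch’s actual tooling) that builds a 256×256 greyscale texture from a few sine terms and writes it out as a PGM file:

```cpp
#include <cmath>
#include <cstdio>

int main() {
    const int W = 256, H = 256;

    // The whole "texture" is just this formula plus a few constants; the 64 KiB
    // of pixel data only exists after generation (in memory, or here on disk).
    std::FILE *f = std::fopen("texture.pgm", "wb");
    if (!f) return 1;
    std::fprintf(f, "P5\n%d %d\n255\n", W, H);

    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            double v = std::sin(x * 0.049) + std::sin(y * 0.031)
                     + std::sin((x + y) * 0.017);                 // range [-3, 3]
            unsigned char p = static_cast<unsigned char>((v + 3.0) / 6.0 * 255.0);
            std::fputc(p, f);
        }
    }
    std::fclose(f);
    return 0;
}
```

The generator is a couple of dozen bytes of constants plus a formula, while the output is 64 KiB of pixels; that ratio is the whole trick.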
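Same idea for the audio item: store note parameters, not samples. This is a bare-bones sketch of my own (it has nothing to do with their real software synthesizer) that renders one decaying sine-wave note into a raw 16-bit sample buffer:

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

// Render one "note" from a handful of parameters -- the stored "score" only
// needs (frequency, duration, volume), not the tens of kilobytes of samples.
std::vector<int16_t> renderNote(double freqHz, double seconds, double volume,
                                int sampleRate = 44100) {
    const double PI = 3.14159265358979323846;
    std::vector<int16_t> out(static_cast<size_t>(seconds * sampleRate));
    for (size_t i = 0; i < out.size(); ++i) {
        double t = static_cast<double>(i) / sampleRate;
        double envelope = std::exp(-3.0 * t);  // simple exponential decay
        double s = std::sin(2.0 * PI * freqHz * t) * envelope * volume;
        out[i] = static_cast<int16_t>(s * 32767.0);
    }
    return out;
}

int main() {
    // A 440 Hz note for one second: ~88 KB of audio from ~24 bytes of parameters.
    std::vector<int16_t> note = renderNote(440.0, 1.0, 0.8);
    std::FILE *f = std::fopen("note.raw", "wb");  // raw mono 16-bit @ 44.1 kHz
    if (!f) return 1;
    std::fwrite(note.data(), sizeof(int16_t), note.size(), f);
    std::fclose(f);
    return 0;
}
```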
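And for the runtime-compression item, a crude illustration of the principle (real packers compress the whole executable image with far better algorithms than run-length encoding): the data ships small inside the binary and is expanded into memory at startup.

```cpp
#include <cstddef>
#include <cstdio>
#include <vector>

// Compressed payload baked into the executable as (run length, byte) pairs.
// A real packer compresses the whole executable image, but the principle
// is the same: ship it small, expand it in memory before use.
static const unsigned char kPacked[] = {
    200, 'A',  50, 'B',  255, 'C',  255, 'C',  90, 'C',
};

std::vector<unsigned char> unpack() {
    std::vector<unsigned char> out;
    for (std::size_t i = 0; i + 1 < sizeof kPacked; i += 2)
        out.insert(out.end(), static_cast<std::size_t>(kPacked[i]), kPacked[i + 1]);
    return out;
}

int main() {
    std::vector<unsigned char> data = unpack();
    std::printf("stored %zu bytes, unpacked to %zu bytes\n",
                sizeof kPacked, data.size());
    return 0;
}
```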
The drawback to all of this is that it’s difficult to create truly realistic textures – a lot of them tend to look a little abstract.
All of this adds up to the ability to keep the files absurdly small while seeming to provide hundreds of megabytes’ worth of graphical data. As a matter of fact, they do take up that much – once they’re all generated in memory after you load the program. But since no (or very little) graphical data is actually stored in the program itself, the file size stays minuscule.
Check out their graphic demos and such; they’re quite impressive. I personally recommend fr-008 (.theprodukkt), fr-022 (ein.schlag), fr-025 (The Popular Demo - not a 64k demo but impressive nonetheless), and fr-044 (Patient Zero - get your red/blue 3D glasses out for this one).
Could this technique have been done in the 80s to improve Nintendo games?
I don’t think there was much processing power in those consoles.
Heh, definitely not. The 6502 is a solid little trooper, and dear to my heart, but I don’t think players would have been willing to wait the two or three weeks it would take to generate all that content.
Not really. The machines of those days didn’t have near the processing power to be able to create procedural textures or audio. Some games did use runtime compression (“packing”) in order to squeeze more data onto the cartridge or disk, though that didn’t really come into vogue ‘til the 16-bit era due to the amount of time it took to decompress; those ol’ 8-bitters, clocking in at somewhere between 1-3MHz, just didn’t have the horsepower to do it in a reasonable amount of time. Plus, they didn’t have the RAM to do it effectively. (Remember, decompression has to take place in whatever RAM is left over once the compressed file(s) are loaded.)
farbrausch’s brand of procedural generation is really only something machines of the 32-bit era were geared towards. The demos require a metric assload of processing power (relative to consoles of the 80s and 90s) just to generate the stuff, never mind pull it all off. I recall a program on the old Atari ST that could display JPEG files in 32,000 colours – a big deal at the time (early-mid 90s) – and it took upwards of 45-60 seconds to decompress and render the colours, just to give you some kind of idea of the amount of time it took to do something we now take for granted as being instantaneous. And that was on an 8MHz 16-bit 68000. Just imagine what the NES would have to do to pull off something similar.
Thanks for the info on Farbrausch, Mindfield. I saw “the product” when it first came out and was totally amazed. I was going to attempt to describe it now, 8 years later, with what little I remembered of it, which would have been completely useless.
Jeez, a 64kB, 11-minute-long 3D video – it’s even more amazing than I remembered.
Indeed it is. I still hold it up as an example of the kind of technological ingenuity you can come up with when you’re given constraints and want to make the most of them. It’s one of the reasons I was never really impressed by the PC demo scene – a lot of it was cool, but it didn’t push the envelope, let alone shatter it, the way the demos of yore did on much more limited, black-boxed systems. At least until Farbrausch turned up and showed me what they could do with a file no bigger than the amount of memory the average 8-bit computer had.
It’s still freakin’ impressive, even knowing how it’s done. That is magic.
It is. Thanks for the explanation and link, I’ll look around some!