A few years ago I was playing around in QBasic and it occurred to me: how would I go about drawing a 3D cube on the screen with only an X-axis and a Y-axis?
What calculations and formulae are used to create a cube on the screen using nothing but X and Y co-ordinates?
Assuming that I understand your question properly, and with the caveat that I am not a graphic artist/game designer, just someone with a geeky interest in video cards, I will attempt a quick answer.
They don’t.
3D graphics are made up of X, Y, and Z coordinates, just as they would be in real space. To create a 3D object, polygons are defined, and then textures are stretched over them.
As far as I know, QBasic lacks all but the most rudimentary graphics functions, and that certainly precludes any 3D. The closest you could get would be a 2D representation of a cube, the same way you can draw something that looks like a cube on a piece of paper using a pen.
If I totally misunderstood your question, post a reply, yell at me, clarify it, and I should get back to you.
I understand that 3D objects all have X, Y and Z axes, but before 3D accelerator cards were available, how were 3D graphics drawn?
Many years ago I had a friend with an old XT computer (amber monitor and all) and he showed me a program which drew true 3D polygons that rotated (they were all wireframe)…
Obviously the programmer didn’t just do it by giving the illusion of 3D, but instead used formulae to work out what the Z coordinates would look like projected onto the X,Y plane… (For nitpickers: I’m aware that technically 3D graphics are still an illusion.)
3-D computer models are data representations of actual 3-D objects; that is, every point has X, Y and Z coordinates.
Trigonometric functions are used to convert the Cartesian coordinates (Xdistance-Ydistance-Zdistance) to polar coordinates (angle1-angle2-distance), and then you can “rotate” a point about an axis by changing the angle measurement for that axis (you can “use” any two axes as the angle bases, so you can always rotate about any axis you want). Then, since you have to display the data in a flat (Cartesian) screen coordinate system, you have to convert back to Cartesian coordinates again. The display program has a numerical factor (a variable) for “increasing” the measurements of near objects and “decreasing” the measurements of far objects. This is where the third dimension “goes”, and this gives you a set of points that you draw lines between, and you’ve got your wireframe whatever-it-is.
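To make that concrete, here is a rough, untested QBasic sketch of a spinning wireframe cube: each corner gets rotated around the Y axis with SIN and COS, then squashed onto the screen by dividing by its distance from the eye. The viewing distance of 4 and the scale factor of 60 are just numbers I picked so it fits on screen; treat it as a sketch of the idea, not polished code.

    SCREEN 13                       ' 320x200, 256 colours
    DIM px(7), py(7), pz(7)         ' cube corners in 3-D
    DIM sx(7), sy(7)                ' projected screen coordinates

    ' the eight corners of a cube centred on the origin
    FOR i = 0 TO 7
        READ px(i), py(i), pz(i)
    NEXT i
    DATA -1,-1,-1,  1,-1,-1,  1, 1,-1, -1, 1,-1
    DATA -1,-1, 1,  1,-1, 1,  1, 1, 1, -1, 1, 1

    a = 0                           ' rotation angle
    DO
        CLS
        FOR i = 0 TO 7
            ' rotate the corner around the Y axis
            x = px(i) * COS(a) - pz(i) * SIN(a)
            z = px(i) * SIN(a) + pz(i) * COS(a)
            y = py(i)
            ' perspective: divide by distance from the eye
            d = z + 4
            sx(i) = 160 + 60 * x / d
            sy(i) = 100 + 60 * y / d
        NEXT i
        ' connect the corners: front face, back face, then the four struts
        FOR i = 0 TO 3
            j = (i + 1) MOD 4
            LINE (sx(i), sy(i))-(sx(j), sy(j)), 15
            LINE (sx(i + 4), sy(i + 4))-(sx(j + 4), sy(j + 4)), 15
            LINE (sx(i), sy(i))-(sx(i + 4), sy(i + 4)), 15
        NEXT i
        a = a + .05
        ' crude delay so it doesn't spin too fast; press any key to stop
        FOR t = 1 TO 3000: NEXT t
    LOOP UNTIL INKEY$ <> ""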
If you want to do solid modeling (showing 3-D objects with solid-colored shaded triangles), you have to designate a point light source, and then figure out the angle each on-screen primitive is facing. Primitives facing towards the light source get colored lighter and those facing away get colored darker. You can then shade the primitives accordingly.
The reason only the front primitives get drawn is simple: all the primitives are initially created with their three points ordered running in one direction (clockwise or counterclockwise), and you have a routine that tests which direction the rotated primitive is running. Primitives running backwards are facing away, so you can skip drawing them. This is done every time the model is rotated and re-displayed.
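The winding test itself is only a couple of lines. Here is a bare-bones QBasic illustration with made-up sample corners, plus the dot-product brightness for the flat shading described above; which sign counts as “running backwards” depends on the order you listed your points in, so don’t take my choice as gospel.

    ' Back-face test and flat shading for one triangle (sample numbers only).
    ' (x1,y1)-(x3,y3) are the projected screen corners of the rotated triangle,
    ' (nx,ny,nz) is its rotated normal, (lx,ly,lz) points toward the light.
    x1 = 100: y1 = 60: x2 = 180: y2 = 80: x3 = 140: y3 = 150
    nx = 0: ny = 0: nz = -1
    lx = 0: ly = 0: lz = -1

    ' Cross product of the two screen edges: its sign says which way the
    ' corners run on screen once the triangle has been rotated.
    cross = (x2 - x1) * (y3 - y1) - (y2 - y1) * (x3 - x1)

    IF cross <= 0 THEN
        PRINT "running backwards - facing away, skip it"
    ELSE
        ' Flat shading: brightness is the dot product of the normal with the
        ' light direction, clamped so faces turned away from the light go dark.
        bright = nx * lx + ny * ly + nz * lz
        IF bright < 0 THEN bright = 0
        PRINT "facing us, brightness ="; bright
    END IF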
If you want to use bitmaps (like any normal modern video game does), there are a few more things you have to do: triangular pieces of bitmap are mapped onto primitives, which are also triangular. You have to have a system for figuring out which corner or edge of the primitive is highest on the screen, so the bitmap can be drawn oriented properly. You may also use a table called a Z-buffer, which is used to record the virtual distance of every visible point on the screen, for collision testing purposes.
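The Z-buffer part is simpler than it sounds: one array entry per pixel holding the nearest depth drawn there so far, and a newly plotted point only “wins” if it is nearer than what is already recorded (that same test is also what keeps near surfaces drawn over far ones). QBasic’s 64K array limit won’t hold a full 320x200 buffer, so this toy sketch only covers a 160x100 patch, with made-up sample values.

    SCREEN 13
    w = 159: h = 99
    DIM zbuf%(w, h)                 ' one integer depth per covered pixel

    ' start every entry at "very far away"
    FOR y = 0 TO h
        FOR x = 0 TO w
            zbuf%(x, y) = 32000
        NEXT x
    NEXT y

    ' a point being plotted only wins if it is nearer than what is there already
    x = 80: y = 50: z% = 5: c = 15  ' sample pixel, depth and colour
    IF z% < zbuf%(x, y) THEN
        zbuf%(x, y) = z%
        PSET (x, y), c
    END IF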
A primitive is a trio of points in virtual space. The reason you have to use triangles is that to make any other shape, you have to “build” it from triangles anyway. That is, you can write your own routine to make squares or octagons, but you have to plot each point one at a time, and after the first two points all you are plotting is triangles anyway.
~
There are lots of optimizations I am leaving out, but that’s about it. It is possible to do from scratch anything you have ever seen anywhere else, but the problem is that commercial/industrial packages are optimized tremendously to run faster. Your QBasic 3-D program will run (in the screen resolutions it is capable of), but it will run sssslllllooowwwwwwwww. C++ is usually the preferred language.
~
Video cards that support DirectX or OpenGL have a microchip that is dedicated to performing some of the mathematical operations of generating 3-D rotations and image filling. It is “hard-wired” for this, and doesn’t do anything else. A dedicated microchip can “run” this math faster than your regular CPU can follow instructions; that’s why console video games look so good even though their processors are far slower than an average PC’s. In video games, most of the processing goes into generating the 3-D image you look at.
~
I dunno squat about OpenGL. I have played with DirectX programming, though. To do this, you need the DirectX developer libraries and a C++ compiler; the general consensus is that MSVC++ 6 or .NET is the best. You write your models and program logic, include a bunch of DirectX libraries, and call DirectX functions to handle the graphics display stuff. Sounds easier, and sometimes it even is, but usually not: you often get cryptic error messages at every level if you don’t do everything exactly right, and finding your source errors within the DirectX tangle can be challenging. You can buy the DirectX libraries CD from MS; the Ver. 7 one I got cost $15. - MC
I considered some time ago writing a book named “3-D from Scratch”, because when I looked, I never found any books that really showed how to do all the 3-D stuff yourself. I ended up learning what I did from reading many books (most of them out of date, from when you HAD to program some of these routines yourself) and lots of online material.
Any language with graphics capabilities can do 3-D imaging, but obviously some are better than others. Methinks the best standard display for QBasic is what, 320x200? If you write a 3-D display program, you will be able to see that it works, but it won’t be real impressive. - MC
Just a nitpick, but what you are describing is simply a few generations down from the 3-D wireframe cube. The QBasic 3-D cube is just as “real” as your 3D-accelerated, computer-animated Lara Croft. Lara is several mathematical steps up the ladder from our lowly cube, but the same basic concept applies.
Anyhow, Mick, you are in luck. Here is a tutorial on 3-D in QBasic.
MC is on the right track: there’s a whole branch of math known as projective geometry. The basic Cartesian coordinates (x,y,z) are held in arrays, and matrix multiplication is used to transform the arrays.
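The matrix version of the SIN/COS rotation from earlier looks something like this in QBasic (sample angle and point; a real engine would multiply the matrices for all its rotations together once per frame and then push every vertex through the result):

    ' A 3x3 rotation matrix about the Y axis, applied to one point.
    DIM m(2, 2)                     ' rotation matrix
    DIM p(2), r(2)                  ' input point and rotated result

    a = .5                          ' rotation angle in radians (sample value)
    m(0, 0) = COS(a): m(0, 1) = 0: m(0, 2) = -SIN(a)
    m(1, 0) = 0:      m(1, 1) = 1: m(1, 2) = 0
    m(2, 0) = SIN(a): m(2, 1) = 0: m(2, 2) = COS(a)

    p(0) = 1: p(1) = 1: p(2) = -1   ' one cube corner

    ' matrix times vector: each row of the result is a dot product
    FOR row = 0 TO 2
        r(row) = 0
        FOR col = 0 TO 2
            r(row) = r(row) + m(row, col) * p(col)
        NEXT col
    NEXT row

    PRINT "rotated point:"; r(0); r(1); r(2)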
There are also other nonstandard approaches to 3D, such as voxels, which are units of volume in xyz coordinates, rather than pixels or points on surfaces.
This does not appear to be currently maintained, but if you click on the section for 3-D documentation you should get enough reading material to keep you busy for a bit.
QBasic is horribly slow for this sort of thing, although for a simple cube it will suffice. Having written a 3-D engine myself, I now have a truly great appreciation for the guys who can make theirs run really, really fast.
I was not aware of that book, but it doesn’t quite seem to be what I had in mind. What I was originally looking for was a book that you could copy code out of, but that would do everything manually so that you could see and understand how it functioned. Maybe it would show two complete, separate examples, one in QBasic and one in C/C++, to which the optimization methods shown towards the end of the book could be applied.
~
A couple of the best books I ever found were game programming books from the WinG era… when it came to trying to understand how some process was done, these were the only books that really showed it all. - MC