I’m working on a project that requires me to implement rudimentary 3D graphics directly in hardware (on an FPGA) - that is, at the level of manually wiggling the voltage on the VGA pins in order to display a picture. As such, I can’t rely on any libraries like OpenGL, DirectX, or whatever to do the drawing for me - I have to do everything myself.
The only problem is that I know nothing about 3D graphics: how to go about projecting coordinates from a 3D coordinate system onto a 2D plane, or what algorithms are used to rasterize images. I assume it involves some sort of matrix operation, but that’s about the sum of my knowledge.
I don’t need anything but the most basic functionality: essentially, I’ll be displaying a set of points in 3D connected by lines, and allowing the user to zoom, rotate and translate the image. I don’t need textures, shading, or any of that stuff.
What books are considered the standards in this field? Does anyone have any personal recommendations?
Foley et al. (Computer Graphics: Principles and Practice) is a good reference, but maybe overkill (plus, I don’t know when it was last updated).
You might try Eric Lengyel’s “Mathematics for 3D Game Programming and Computer Graphics, Second Edition” (ISBN: 1584502770) if you’re more mathematically inclined. It’s a math book, not a programming book, although at the level you’re talking about the difference is negligible.
The OpenGL blue book actually has quite a decent section on the rendering pipeline and how one would reimplement it if one desired. Modern 3D hardware doesn’t actually follow the GL pipeline all that closely, for performance reasons, but it’s a nice model to base your implementation on. It depends on what you want to do: simple, solid-colour shading with 3D geometry transforms is not especially difficult, and simple lighting is also quite easy; texturing is a bit trickier. But if you don’t care about performance, you can actually build quite a full-featured 3D engine fairly simply.
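If it helps to see the shape of it, the classic fixed-function pipeline that book describes reduces, per vertex, to a few matrix multiplies and a divide. A rough C sketch (the type names and function signatures here are just illustrative, not taken from the blue book):

```c
typedef struct { float x, y, z, w; } vec4;
typedef struct { float m[4][4]; } mat4;   /* row-major 4x4 matrix */

/* Multiply a 4x4 matrix by a column vector. */
vec4 mat4_mul_vec4(const mat4 *a, vec4 v) {
    vec4 r;
    r.x = a->m[0][0]*v.x + a->m[0][1]*v.y + a->m[0][2]*v.z + a->m[0][3]*v.w;
    r.y = a->m[1][0]*v.x + a->m[1][1]*v.y + a->m[1][2]*v.z + a->m[1][3]*v.w;
    r.z = a->m[2][0]*v.x + a->m[2][1]*v.y + a->m[2][2]*v.z + a->m[2][3]*v.w;
    r.w = a->m[3][0]*v.x + a->m[3][1]*v.y + a->m[3][2]*v.z + a->m[3][3]*v.w;
    return r;
}

/* One vertex through the GL-style pipeline:
   object space -> eye space -> clip space -> normalized device coords -> pixels. */
void transform_vertex(const mat4 *modelview, const mat4 *projection,
                      vec4 v, int screen_w, int screen_h,
                      int *sx, int *sy) {
    vec4 eye  = mat4_mul_vec4(modelview, v);     /* model + view transform */
    vec4 clip = mat4_mul_vec4(projection, eye);  /* perspective projection */
    float nx = clip.x / clip.w;                  /* perspective divide -> [-1, 1] */
    float ny = clip.y / clip.w;
    *sx = (int)((nx * 0.5f + 0.5f) * screen_w);  /* viewport transform to pixels */
    *sy = (int)((0.5f - ny * 0.5f) * screen_h);  /* flip y: screen origin is top-left */
}
```

(This omits clipping against the view volume, which real implementations need before the divide.)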
Another alternative might be to look at pre-GL software rendering engines like the Quake Engine to get an understanding of how it was done there.
If you really want to work on the basics, that is, a software 3D renderer, you can try Tricks of the 3D Game Programming Gurus. It uses DirectX as a framework, but all rendering of polygons, lighting, and so on is done in code.
If all you need is wireframes (i.e., no surfaces), it’s going to be considerably easier than what everyone is telling you here. First, figure out where all the points in your object are, in Cartesian coordinates. To rotate your object, represent each point’s position as a vector and multiply those vectors by a rotation matrix (you might have to write your own matrix multiplication routines, but for only 3 dimensions it’s not too hard).

Now you want to render all of your objects. This is just a matter of setting up a spherical coordinate system, centered on where your user’s eyes would be. Convert all the Cartesian coordinates of your points into these spherical coordinates. Ignore r; theta and phi will convert into x and y values on your screen.

Now that you have the x and y values for everything on the screen, connect the points that need to be connected with lines. Presto, wireframe 3D.
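For that last step, connecting the projected points with lines, Bresenham’s line algorithm is the usual choice, since it needs only integer adds and compares (handy if the drawing ever moves into hardware). A minimal C sketch, where `set_pixel` is a hypothetical stand-in for whatever actually writes to your framebuffer:

```c
#include <stdlib.h>

/* Hypothetical framebuffer hook: replace with whatever sets one pixel. */
void set_pixel(int x, int y);

/* Bresenham's line algorithm: integer-only, handles all octants. */
void draw_line(int x0, int y0, int x1, int y1) {
    int dx =  abs(x1 - x0), sx = x0 < x1 ? 1 : -1;
    int dy = -abs(y1 - y0), sy = y0 < y1 ? 1 : -1;
    int err = dx + dy;                          /* running error term */

    for (;;) {
        set_pixel(x0, y0);
        if (x0 == x1 && y0 == y1) break;
        int e2 = 2 * err;
        if (e2 >= dy) { err += dy; x0 += sx; }  /* step in x */
        if (e2 <= dx) { err += dx; y0 += sy; }  /* step in y */
    }
}
```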
Do not transform your coordinate system into spherical coordinates. A straight line in Cartesian space is a curved line in a spherical coordinate system. Instead, use the 4x4 matrix perspective transformation that is given in any of the references listed so far.
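To make that concrete, here is a rough C sketch of the two matrices involved for a wireframe viewer: a rotation about the Y axis (for rotating the model) and the standard 4x4 perspective projection, followed by the divide by w and the mapping to pixel coordinates. The field of view, near/far planes, camera offset, and 640x480 screen size are illustrative assumptions, not anything the references prescribe:

```c
#include <math.h>

typedef struct { float x, y, z, w; } vec4;
typedef struct { float m[4][4]; } mat4;   /* row-major */

/* Rotation by `angle` radians about the Y axis. */
mat4 rotate_y(float angle) {
    float c = cosf(angle), s = sinf(angle);
    mat4 r = {{{ c, 0, s, 0},
               { 0, 1, 0, 0},
               {-s, 0, c, 0},
               { 0, 0, 0, 1}}};
    return r;
}

/* Standard OpenGL-style perspective projection; fovy in radians. */
mat4 perspective(float fovy, float aspect, float znear, float zfar) {
    float f = 1.0f / tanf(fovy * 0.5f);
    mat4 p = {{{ f / aspect, 0, 0, 0},
               { 0, f, 0, 0},
               { 0, 0, (zfar + znear) / (znear - zfar), (2.0f * zfar * znear) / (znear - zfar)},
               { 0, 0, -1, 0}}};
    return p;
}

vec4 mul(const mat4 *a, vec4 v) {
    vec4 r = {
        a->m[0][0]*v.x + a->m[0][1]*v.y + a->m[0][2]*v.z + a->m[0][3]*v.w,
        a->m[1][0]*v.x + a->m[1][1]*v.y + a->m[1][2]*v.z + a->m[1][3]*v.w,
        a->m[2][0]*v.x + a->m[2][1]*v.y + a->m[2][2]*v.z + a->m[2][3]*v.w,
        a->m[3][0]*v.x + a->m[3][1]*v.y + a->m[3][2]*v.z + a->m[3][3]*v.w,
    };
    return r;
}

/* Rotate a point, push it out in front of the camera, project, map to 640x480. */
void project_point(vec4 p, float angle, int *sx, int *sy) {
    mat4 rot  = rotate_y(angle);
    mat4 proj = perspective(60.0f * 3.14159265f / 180.0f, 640.0f / 480.0f, 0.1f, 100.0f);

    vec4 v = mul(&rot, p);
    v.z -= 5.0f;                 /* camera at origin looking down -z, model 5 units away */
    v = mul(&proj, v);

    float nx = v.x / v.w;        /* perspective divide -> normalized device coords */
    float ny = v.y / v.w;
    *sx = (int)((nx * 0.5f + 0.5f) * 640.0f);
    *sy = (int)((0.5f - ny * 0.5f) * 480.0f);
}
```

With that in place, zoom and translation are just more matrices (or a scale factor and an offset) applied before the projection step.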
I’ll have to double-check exactly what it was that I used in my 3D programming experiments, but whatever it was, it worked. I might have had to introduce a bit more complexity than the OP needs, since I did mine in red-blue stereo.