Computed Axial Lithography

This is a new 3D printing method that uses something like Computed Tomography to produce 3D objects in a transparent container of resin. The container rotates and a single light source (a DLP projector) solidifies the resin as it rotates, conceptually forming a 3D object all at once instead of incrementally, as other 3D printing methods do.

Here is a half-decent YouTube video about it. Here is the wiki article, and it links to several other articles which say pretty much the same thing as in the video.

The video explains that this grew out of experimental use of multiple light sources to solidify a 3D shape inside a container of resin. In that mode the resin would be solidified where 3 sources of light intersected with enough intensity to solidify the resin. It’s easy to see how resin would solidify from the intensity of multiple intersecting light sources. But in the new process there is only one light source and the container rotates.

So how does one light source solidify the resin? The video starts to talk about this around 2:20 but I’m not picking up how a bit of resin in the middle of a container solidifies. If it’s just the high level of intensity of the light the resin should solidify just inside the container wall when the light hits it. Is there some very specific light intensity that will solidify the resin and a higher intensity light has to pass through some of the resin which absorbs some of the light before it reaches the specific point of intensity where the resin solidifies?

I hope I’m asking the question right, basically in this scheme I don’t understand how a single light source can solidify resin in the middle of a container. The video and the linked articles all seem to gloss over this, or else I’m just not getting the explanation.

There might be some sort of “memory” effect, where a given spot solidifies if it’s hit by the light source continuously over some span of time. As the container rotates, only one spot will get that duration of illumination.

Just what I came in to post. I’m not clear how it could do certain concavities that way, but it makes some sense.

Me either. I’ve watched the explanation 10 times.

I think it’s concavities on the vertical axis that are a problem but maybe those can be done with the incremental approach somehow.

And I suppose the resin is still transparent after solidifying, but so far I think to make a drinking glass you would need to build up the side walls something like a reverse lathe, but maybe it could be done in a single rotation. You would try to form the side walls with light aimed at just the side of the glass and not the center.

Think of it this way:

Mentally break the mass of polymer up into a large number of very small volumes arranged in a 3D array. Call each of these volumes a voxel.

Assume that a voxel of resin will solidify if it is exposed to a sufficient number of photons of light. The number of photons incident on the voxel will be a function of the intensity and duration of the light.

Consider first a single beam of light one voxel in diameter. Shine that beam of light through the center of rotation of the mass of resin. Rotate the mass of resin. The voxel exactly at the center is exposed to the beam of light continuously; voxels off-center are only exposed part of the time. With the correct combination of light intensity, rotation speed, and exposure time you can cause the center voxel to solidify while the remaining voxels remain liquid.

(Aside: one point made in the presentation is that the resin has a sharp threshold - the transition between liquid and solid is very sharp so you don’t get mushy voxels.)
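The single-beam argument above is easy to sketch numerically. This is a toy model, not the actual CAL math: the unit-intensity beam, the beam half-width, and the angle count are all made up for illustration, and "dose" is just the fraction of a rotation a voxel spends inside the beam.

```python
import math

def dose_over_rotation(radius, beam_half_width=0.5, n_angles=360):
    """Fraction of one full rotation during which a voxel at the
    given radius from the center sits inside a fixed beam of light
    (the beam runs along the x-axis through the center of rotation)."""
    lit = 0
    for k in range(n_angles):
        theta = 2 * math.pi * k / n_angles
        # The voxel's y-coordinate in the fixed frame at this angle;
        # it is inside the beam when |y| is within the beam half-width.
        y = radius * math.sin(theta)
        if abs(y) <= beam_half_width:
            lit += 1
    return lit / n_angles

# The center voxel is lit the whole time; off-center voxels only part of it.
print(dose_over_rotation(0.0))  # 1.0
print(dose_over_rotation(3.0))  # much less than 1.0
```

With the right intensity and rotation speed, only the voxel whose dose fraction is 1.0 crosses the solidification threshold.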

Now change the single beam of light into a bunch of parallel beams, each one voxel in diameter and spaced at a one voxel pitch (ETA: in a straight line perpendicular to the axis of revolution). Consider a voxel that is off-center in the mass of resin. Rotate the mass of resin, but control the beams of light so that only the beam which passes through the voxel of interest is on, all the others are off. As the mass rotates, different light beams turn on for different periods of time. The voxel of interest is again the only one which is exposed to light the whole time, so it is the only one which solidifies.

Now say we have another voxel somewhere else in the resin mass. We can calculate a light beam pattern that will cause only that voxel to solidify. This pattern is different from the first voxel’s.

Here’s the magic: If we merge the two light beam patterns so that they are “playing” at the same time we can cause both voxels to be solidified in the same exposure!

By repeating the calculations for each voxel that you want solidified and merging the resulting light beam pattern you can solidify arbitrary voxels within the resin.
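Here is a toy sketch of that merging step, under the same simplifications (one slice, an integer voxel grid, beams one voxel wide, and a dose threshold of "lit at every angle"). The grid size, angle count, and target voxels are all made up for illustration:

```python
import math

N_ANGLES = 180
GRID = range(-5, 6)  # an 11x11 slice of voxels, in the resin's frame

def beam_for(voxel, theta):
    """Index of the parallel beam (beams run along x, indexed by their
    y-offset) that passes through this voxel once the resin has
    rotated by theta."""
    x, y = voxel
    return round(x * math.sin(theta) + y * math.cos(theta))

def solidified(targets):
    """Merge the per-target beam patterns and return every voxel
    whose dose meets the threshold of being lit at all angles."""
    angles = [math.pi * k / N_ANGLES for k in range(N_ANGLES)]
    # pattern[k] = the set of beams switched on at angle k: the union
    # ("merge") of each target voxel's individual on/off pattern.
    pattern = [{beam_for(t, th) for t in targets} for th in angles]
    solid = set()
    for x in GRID:
        for y in GRID:
            dose = sum(beam_for((x, y), th) in pattern[k]
                       for k, th in enumerate(angles))
            if dose == N_ANGLES:
                solid.add((x, y))
    return solid

print(solidified({(0, 0), (3, 2)}))  # both targets solidify in one pass
```

In a real system the threshold is a photon dose rather than "lit at every angle", and attenuation matters, but the principle is the same: each target's merged pattern is on at every angle for that target, while stray voxels miss at least some angles.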

Note that I have described only one “slice” of the mass of resin. Repeat the calculations for more slices in the z-axis (along the axis of rotation) and replace the line of parallel light beams with a 2D array of light beams to expose a 3D object.

Obviously this is a very simplistic explanation; in the real system the light comes from a projector, so the “beams” are not parallel. Plus I have not considered the attenuation of the light beam as it passes through the resin.

However, in many ways this technique is the reverse of a CT scanner. There is a field of mathematics dealing with something called the Radon Transform, which describes the projection of a 3D object into a series of 2D images. In a CT scan you actually measure the 2D images and calculate the 3D object through an inverse Radon Transform, so I assume this printer is using a forward Radon Transform to determine which 2D images to “play” into the resin while it is rotating to get the desired 3D object.
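For the curious, the forward transform is easy to sketch for one 2-D slice: each projection angle just sums the pixel values that land in each perpendicular-offset bin. This is a crude nearest-neighbor discretization for illustration, not production CT math:

```python
import math

def forward_radon(image, n_angles=8):
    """Crude forward Radon transform of a 2-D slice: for each angle,
    sum pixel values into nearest-integer perpendicular-offset bins."""
    c = (len(image) - 1) / 2  # project about the image center
    projections = []
    for k in range(n_angles):
        theta = math.pi * k / n_angles
        bins = {}
        for i, row in enumerate(image):
            for j, v in enumerate(row):
                if v:
                    s = round((j - c) * math.cos(theta) + (i - c) * math.sin(theta))
                    bins[s] = bins.get(s, 0) + v
        projections.append(bins)
    return projections

# A 5x5 slice with one bright pixel at the center of rotation:
# every projection is a spike in the s = 0 bin.
img = [[0] * 5 for _ in range(5)]
img[2][2] = 1
print(forward_radon(img))  # eight copies of {0: 1}
```

Each of those per-angle histograms is one column of the "video" that would be played into the rotating resin.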

And I suspect that just like there are certain structures that are difficult to image using a CT scanner, there will be certain structures which are difficult to make with this printer.

Thanks. That confirms what I was thinking: it’s the combination of duration and intensity that solidifies the resin. The concavities present interesting problems, but since their surfaces are actually 3D they will be non-concave relative to the light source at some point in the rotation. The transforms allow as much as possible of the 3D object to be solidified in just one rotation, saving time just as a CT scanner minimizes the amount of radiation needed to capture a 3D definition. I’m not sure why the complex transforms are necessary but I think it must relate to the type of 3D definition used.

Objects produced with this resin are soft and flexible. It’s not that great a game changer until harder and stronger materials can be used.

The drivers for the printer would have to be doing a forward Radon transform, which is easy and well-defined. But the resin itself would effectively have to be doing the inverse Radon transform, which is not well-defined. And it’s going to be doing a very simplistic version of it, without any of the heuristics which are generally included in software implementations. I’d expect that this technology would work just fine for simple shapes like cubes and spheres, but then, if you’re making cubes and spheres, 3d printing is the wrong choice of technology. For anything complex enough to be worth 3d printing, I’d expect this method to get the details wrong.

Not parsing what you mean by “resin itself would effectively have to be doing the inverse Radon transform”, unless that’s how you’re describing the dimensions of the solid object produced.

I’m not real clear on why Radon transforms are needed unless the assumption is that the 3D object is defined by the results of a CT scan, real or virtual. If the object was solids-modeled or defined by polygons and surface mapping I think there would be other methods of controlling the printer that are simpler. Not that it matters with modern processors, though; performance won’t be an issue here, and my lack of ability to deal with complex math isn’t going to restrain others.

Right, it’s a cool idea but for the materials available and the potential problems it doesn’t look all that great.

OTOH the 3 light source method would seem to be fine, not really more incremental than the 1 light source+rotation method, and much easier to make any possible shape even if incremental means are needed. The rotation model might add a little smoothness on the horizontal plane.

The Radon transform (and its inverse) is the math that implements the conversion between the 3D solid model and the 2D projections which would be projected into the rotating resin. The inverse Radon transform converts those 2D projections back into the solid object. So when Chronos says the resin is performing the inverse Radon transform, he means that the resin is converting the 2D projections into a 3D object through a physical process that is described by the inverse Radon transform.

The process I described is actually very similar to the backprojection process which is used to implement the inverse Radon transform to generate 2D cross-sectional or 3D images from the measured data in a CT imaging system. In that case the x-ray/body/detector array is doing the forward Radon transform and the computer is doing the inverse.
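A minimal sketch of that backprojection step, using a toy nearest-neighbor discretization with made-up sizes. Smearing each 1-D projection back across the image at its angle and accumulating is, loosely, what the light dose does in the resin:

```python
import math

def backproject(projections, size):
    """Unfiltered backprojection: smear each 1-D projection back
    across the image at its angle and accumulate the dose."""
    n_angles = len(projections)
    c = (size - 1) / 2
    dose = [[0.0] * size for _ in range(size)]
    for k, bins in enumerate(projections):
        theta = math.pi * k / n_angles
        for i in range(size):
            for j in range(size):
                s = round((j - c) * math.cos(theta) + (i - c) * math.sin(theta))
                dose[i][j] += bins.get(s, 0)
    return dose

# Projections of a single point at the center of rotation: a spike
# in the s = 0 bin at every angle.
n_angles = 36
dose = backproject([{0: 1}] * n_angles, size=7)

# The center pixel is hit by every smeared projection; everything
# else gets only a partial dose (the classic backprojection halo).
# Thresholding at the full dose -- roughly what the resin's sharp
# solidification threshold does -- recovers just the point.
solid = [[1 if d == n_angles else 0 for d in row] for row in dose]
```

The halo around the reconstructed point is one concrete example of the reconstruction artifacts mentioned above; CT software filters the projections to suppress it, while the resin gets only the raw, 1-bit-thresholded version.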

So this printing technique is going to be limited by the same limitations of CT imaging: reconstruction artifacts, structures shading other structures, finite spatial bandwidth, etc. Add to this the fact that the resin is essentially doing a 1-bit truncation on the inverse transform output (either solidified or not solidified) and it is pretty clear that this technique might be good for certain classes of structures but certainly not for generalized 3D printing.

My guess is that the example they used is pretty representative of what can be done well - a solid structure without interior details (which would be subject to artifacts and shading) and without sharp detail on the exterior (which would be limited by the finite spatial bandwidth). Actually your example of a cube would likely not turn out well due to the high spatial frequency of the edges.

Right, that’s the only thing that made sense, it’s solidifying resin into the shape based on 2D definitions of the object.

And maybe that’s what they did, but I think that 2D definition is assumed to be the result of Radon transformation from an original 3D object. The 3D object could have a simpler definition, CAD systems produce 3D definitions with solids modeling and polygonal meshes. I think there are simple ways to produce the light image needed to form the object if you start from those simple definitions.

This would describe most 3D printed objects. And with this soft material it would be mostly useful to make molds which can’t have complex concavities or interior spaces. You probably render the more complex parts in split portions to make split molds with. This isn’t taking over the 3D printing world without a lot of improvements.

You could use a ray-tracing sort of algorithm to generate the projection images, but you are then using numerical techniques to calculate the forward Radon transform. This is in fact how the inverse transform is actually done in CT imaging (the “backprojection” technique I mentioned in an earlier post). Any method you use to generate the projections will be a numerical approximation of the Radon transform.

Any 3d printer must start with some specification of the object, usually based on polygons (such as an STL file). But then it must do some sort of processing to convert that into whatever format the printer actually uses, which will depend on the kind of printer. For an FDM printer (the cheap kind that are ubiquitous nowadays), that means constructing the path for the head to follow, and the extrusion rate, and so on. For this printer, it would mean constructing the two-dimensional patterns of light that it’s aiming at the tank as it rotates. That process of constructing the 2-d patterns would be a Radon transform, which is fairly easy and straightforward to do.

The computer hooked up to a CT scanner, however, or the resin responding to this light, has to do the inverse Radon transform, and that is not straightforward, because it’s not well-defined. That is to say, for any given set of 2d images, there are multiple 3d objects that would produce that set of images, and you don’t know which of those 3d objects is the correct one. A well-programmed computer interpreting a CT scan will have a set of general guidelines and assumptions that will help it to guess what the correct 3d object is. The tank of resin won’t, and so will (presumably) take on the simplest form that works, which probably won’t be the correct one.
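The non-uniqueness is easy to demonstrate with a toy example: with only a couple of projection angles, two different voxel sets can cast identical "shadows". The voxel coordinates and angles here are made up purely for illustration:

```python
import math

def projections(voxels, angles):
    """Per-angle histogram of the beam offsets hit by a voxel set."""
    out = []
    for theta in angles:
        bins = {}
        for x, y in voxels:
            s = round(x * math.cos(theta) + y * math.sin(theta))
            bins[s] = bins.get(s, 0) + 1
        out.append(bins)
    return out

# Two *different* voxel pairs...
a = {(0, 1), (1, 0)}
b = {(0, 0), (1, 1)}
# ...cast identical shadows when viewed only at 0 and 90 degrees:
two = [0.0, math.pi / 2]
print(projections(a, two) == projections(b, two))  # True
# A third angle (45 degrees) tells them apart:
three = two + [math.pi / 4]
print(projections(a, three) == projections(b, three))  # False
```

More angles shrink the ambiguity, which is presumably part of why both CT scanners and this printer use many finely spaced views.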

I can understand that as an issue if you were trying to take a 2D data from a CT scan and then trying to render a 3D object in 2D for human viewing. I’m not understanding how this can apply to a tank of resin where you are producing a 3D object.

Are you saying replaying the same video on that machine could produce different 3D objects in the resin?

I’m not seeing how that can happen.

ETA: Radon transforms aren’t as complicated as I thought. It’s what I would do in code instead of using all those funny letters.

The general problem of ‘what 3-d shape created these 2-d projections’ may have multiple solutions, but the problem of ‘what 3-d object will result from this pattern of light projected on the resin’ is the result of a deterministic physical process, so it clearly has a unique solution. The computer should be able to accurately predict what 3-d shape a particular 2-d light pattern will make. The computer may not be able to calculate a set of light patterns that produce a particular given 3-d object (say, nested spheres), but you shouldn’t get something ‘wrong’ in the sense of different from the computer’s expectation.

I think the mathematical answer to the contrast between the general inverse Radon transform and the lithography machine is that, given a set of 2-d images, the 3-d object is unique IF the object is assumed to be an object capable of being produced by a lithography machine (e.g. it can’t be nested spheres).