On one episode of CSI: NY, Sheldon Hawkes and the resident M.E. use a piece of tech that I thought was mildly interesting. Hawkes scanned a corpse using an MRI-like device (I think the word “MRI” was actually used, but I’m not sure), then uploaded the results into a computer. He then stood in a room much like a Star Trek holodeck, and created a 3-D holographic image of the corpse floating in the air in front of him. He was able to manually manipulate this image (in this case, pulling out and unwinding the intestines), and was able to find a wound that would’ve taken a lot of time and cutting IRL.
How far away are we from this kind of technology (or at least, how far away are we from it being in more common use than the very top government and private labs)?
We have portable 3D scanners. But those just record the outside dimensions of an object. The thing you’re talking about sounds like it’s a portable x-ray/MRI, which is…unlikely to come into being any time soon. And then the AI to turn a 3D representation of an x-ray into a working representation of a human body that can be modified like a human body inside a computer simulation…that’s just…nothing we’ll see for at least a hundred years.
You really think it’ll be a full century? Look at the difference between the original Nintendo and the Wii, about twenty years apart. The MRIs we have today, considering we didn’t even have the technology forty years ago. The iPhone vs. the brick phones in the early 80s. Technology increases at an exponential rate, and although it would be difficult to implement, I think you’re vastly underestimating the speed at which computers and the like are developing.
At the moment, no one has any idea how to create a true AI system. We also have no ability to create and simulate the human body in a computer, nor would we have any idea how to do so. You would essentially need to grow a body, since individually creating each sort of microbe and cell, and coding how they interact and where they’re located in relation to one another, would take a hundred years of brute-force creation. We’re made up of billions or zillions of individual pieces. True, you might not need that level of detail for basic things like muscles, bones, and organs, but like I said, you still need a super complex AI of some sort to translate a 3D snapshot of a lump of flesh into a -specific- human form complete with lumps and wounds. Neither of those computer/biology problems is on the horizon of being solved, I’m decently sure.
Being able to scan a body from the outside, upload to a computer, pass it to a holodeck, and then actually manipulate the innards of a human on said holodeck to detect an internal wound that’s otherwise undetectable from the outside?
Oh, it’ll be much more than a century. Hell, it took going from Star Trek to Star Trek TNG before we saw holodecks… and you had 100 years of separation in just those story lines!
I don’t see as much of a problem as everybody else – we’ve got the scanning technology (I don’t see any mention in the OP of this being done onsite). You could then compute correspondences between your scan and a 3D morphable model of a human being, complete with guts and organs, the way those are connected, which parts are which, and how they can move, and you’d get a 3D model of your corpse with fully movable parts. I’m basically thinking of something like this technology, only for the complete body and not just the face – you give the software a set of points to compute the correspondences (like this), and your model can be morphed into an exact likeness of your scan, complete with moving parts.
Creating the model man might be a challenge, but it would only have to be done once, and I don’t see anything in principle forbidding anybody to go through this effort.
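For concreteness, here’s a minimal sketch of the fitting step I have in mind, assuming the “model man” is already stored as a mean shape plus a handful of deformation bases, and that matching landmark points have been picked on the scan. All the names and array shapes below are made up for illustration, not taken from any particular library:

```python
# Minimal sketch of fitting a morphable body model to scan landmarks.
# Everything here is hypothetical: a "model man" is assumed to be stored
# as a mean shape plus a few deformation bases, and we solve a
# least-squares problem so the model's landmarks match the scan's.
import numpy as np

def fit_morphable_model(mean_shape, bases, landmark_idx, scan_landmarks):
    """mean_shape: (N, 3) template vertices
    bases: (K, N, 3) deformation directions of the template
    landmark_idx: indices of template vertices with known correspondences
    scan_landmarks: (L, 3) matching points measured on the scan"""
    A = bases[:, landmark_idx, :].reshape(len(bases), -1).T   # (3L, K)
    b = (scan_landmarks - mean_shape[landmark_idx]).ravel()   # (3L,)
    weights, *_ = np.linalg.lstsq(A, b, rcond=None)           # blend weights
    # The same weights are applied to every vertex, so the internal
    # anatomy (guts and all) morphs along with the surface.
    return mean_shape + np.tensordot(weights, bases, axes=1)
```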
I see the real problem in the hologram technology – there are good ways to create floating 3d images (or at least the appearance thereof, with some Pepper’s Ghost-like techniques), and there are good ways to track motion in 3d space to serve as an interface, but I can’t really imagine a good way to combine the two – Pepper’s Ghost separates the viewer and the image by a pane of glass, for example. There are, however, ways of doing this in 2d, like the heliodisplay, and 2d is always just the right choice of glasses away from 3d.
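To make that last point concrete: a red/cyan anaglyph packs a left-eye and a right-eye render of the same scene into one flat image, so any 2D display (a heliodisplay included) plus the right glasses gives you the appearance of depth. This is just a toy sketch using numpy, with the simplest possible colour split:

```python
# Toy illustration of "2d is just the right choice of glasses away from 3d":
# combine two offset renders into a single red/cyan anaglyph image.
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """left_rgb, right_rgb: (H, W, 3) uint8 renders of the same scene
    from two slightly offset virtual cameras."""
    out = np.empty_like(left_rgb)
    out[..., 0] = left_rgb[..., 0]        # red channel from the left eye
    out[..., 1:] = right_rgb[..., 1:]     # green/blue from the right eye
    return out
```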
‘Detecting an internal wound that’s otherwise undetectable from the outside’ is the whole point of what MRIs and X-rays do, isn’t it? The only difference here is the way they’re manipulating the scan. I don’t know much about these things, but it doesn’t sound as far-fetched to me as everyone’s making it out to be.
I can’t speak for Santo, but I know that idea from Ray Kurzweil, who argues for a sort of ‘generalized’ Moore’s law – here’s a talk by him on the matter (among others).
We can already do the first two. It’s already a 3-D image; it’s just represented as a stack of 2-D images in layers. That’s why an MRI shows a few dozen pictures – they’re all thin “slices” of the body.
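To illustrate, stacking those slices back into a single volume is routine. Here’s a rough sketch using pydicom (a real DICOM reader), simplified by assuming one single-frame .dcm file per slice and ignoring spacing and orientation tags:

```python
# Sketch of rebuilding the 3-D volume from a pile of 2-D MRI/CT slices.
import numpy as np
import pydicom                      # assumed available; any DICOM reader would do
from pathlib import Path

def load_volume(slice_dir):
    slices = [pydicom.dcmread(p) for p in sorted(Path(slice_dir).glob("*.dcm"))]
    # Order the slices along the scanner's z-axis before stacking.
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    return np.stack([s.pixel_array for s in slices])   # shape: (depth, H, W)

# volume = load_volume("mri_study/")   # hypothetical folder of slices
# volume[:, :, 128]                    # a re-slice through the same data
```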
The second one.
I was mostly speaking of Moore’s law, but since we’re talking about computers, I don’t see why it wouldn’t fit.
I worked for this company in the ’90s. At the time, they were developing more generalized versions of the viewing wand that would do real-time 3D internal imaging of the whole body. I heard that the intracranial version got FDA approval just a couple of years ago; I haven’t heard whether the more generalized version ever reached human trials.
Well, there’s always a stickler who points out that strictly speaking, Moore’s law as such only applies to transistor counts, so I merely wanted to pre-empt that.
Sage Rat, you’ve said twice that it’d have to use AI, and I don’t understand why. Doesn’t AI imply that the computer will be making its own decisions? In this case, the user will be supplying all the inputs, just as they would in any video game.
It needs AI to figure out where the coroner’s hand is and how it’s moving. Computer vision is currently at the point where if you split up the processing among three Playstation 3s, you can decide in slightly over a second whether a static image depicts a bar stool. Following a human hand in real time is completely beyond the capacity of today’s computers, and will remain so for some time.
Waldo devices could be used to facilitate this in real time. The trick, in my opinion, would be getting the computer to differentiate the anatomical structures – to realize what is connected to what, and how well…
Not to mention that our ability to capture motion in real time is actually quite good; it just isn’t done by simple vision but by well-defined points of reference.
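As a rough illustration of what I mean by well-defined points of reference: put a brightly coloured marker on the glove and threshold for it every frame, rather than trying to recognize a bare hand. OpenCV is a real library; the HSV range below is a made-up example for a green marker:

```python
# Marker-based tracking sketch: find a coloured marker's centroid per frame.
import cv2
import numpy as np

def track_marker(frame_bgr, lo=(45, 80, 80), hi=(75, 255, 255)):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo, dtype=np.uint8),
                            np.array(hi, dtype=np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                       # marker not visible
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])     # centroid (x, y)
```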
Sure, but that’s not what was depicted on CSI. Since the OP was specifically asking how far away we are from that technology, the existence of motion capture or other alternatives doesn’t really enter into it.
I can’t comment on the computer part of the problem, but projecting a 3-d image like that is so far beyond our current technology that nobody has any clue how one could proceed with it. I won’t even call it a “hologram”, since the principles behind holograms are understood, and they can’t do things like that (not even for still images, and animating holograms is a very tricky technology that we’re years or decades away from). The basic problem is that any ray of light which intersects a point on a holographic image must actually intersect the glass plate that the hologram is encoded in. If you have a hologram which extends above its projector, then you can only see it from above, not from the side, and if you reach behind something on the image, your hand, even though it’s “behind” the image, will obstruct the part of the image it’s behind (a very disorienting effect, by the way).
They do the second one, which is what I said: we’ve already got the scanning technology that ‘detects internal wounds that are otherwise undetectable from the outside’; the problem is turning that scan information into a hologram. That sounds difficult to me, but is it really going to take 100 years? To create holograms?