The ability to match a blob of data to a human requires recognition capabilities, which generally fall under the scope of AI software. Everyone's insides are different, so while it is perhaps doable to determine the pose of a human body and relate it to a pre-made model in the system, that's not what we're talking about here. This wouldn't be a premade model; it would be one crafted entirely from 3D scan data, all the way down to the insides, including all the blemishes and holes exactly where they are. That means the computer has to recognize which blotches on the scan are which organs, what they signify medically (like a puncture wound), and so on. Even reading a flat, 2D X-ray and recognizing the medical ailments in it is something that, today, only a trained human can do reliably. What's being described here is significantly harder.
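To give a rough sense of the gap, here is a minimal sketch (synthetic data, not a real pipeline) of the part a computer *can* do trivially: classifying CT voxels into coarse tissue types by density, since tissue classes fall into roughly known Hounsfield-unit ranges. The HU cutoffs below are standard approximations; the volume is randomly generated for illustration.

```python
import numpy as np

# Synthetic 3D CT volume: voxel intensities in Hounsfield units (HU).
# Real scans are on the order of 512x512x(hundreds of slices);
# a tiny 4x4x4 cube is enough to illustrate the idea.
rng = np.random.default_rng(0)
volume = rng.uniform(-1000, 1500, size=(4, 4, 4))

# Coarse tissue classes via approximate HU ranges. This is the easy,
# purely mechanical part: one fixed threshold per class.
labels = np.full(volume.shape, "soft", dtype=object)
labels[volume < -500] = "air"                        # lungs, cavities
labels[(volume >= -500) & (volume < -50)] = "fat"
labels[volume >= 300] = "bone"

print(np.unique(labels.astype(str)))
```

What this thresholding cannot tell you is exactly the hard part described above: which organ a soft-tissue voxel belongs to (many organs overlap heavily in density), or whether a given cavity is normal anatomy or a puncture wound. That step requires semantic recognition of the kind that, per the paragraph above, currently only humans perform reliably.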