We have a picture of a microscope slide of fly brains, and we've managed to locate the centers of those brains and output them as a list of x,y points in pixels using ImageJ.
We also have a list of coordinates of the centers of the fly brains as detected by the microscope; it uses a totally different coordinate system.
We need to line up the microscope brain center coordinates with the picture brain center coordinates. This is made much harder by the fact that not all brains on the slide get imaged by the microscope, so the picture of the slide can contain a lot more brains than the microscope outputs. Consequently, the number of x,y brain center coordinates from the picture can be much larger than the number of coordinates from the microscope, and we cannot assume that any given x,y from the slide photo will show up in the microscope output.
Any dopers out there know if this is a well-known problem, and what a straightforward solution to it might be?
The general name for what you’re trying to do is image registration: you want to find the transformation from one image coordinate space to another. In your case, since you have a number of locations known to be in both images, feature-point based registration is an obvious way to proceed.
This can (obviously) vary between easy and hard, depending on lots of specifics of your problem, like the number of available features (how many points do you expect to be in both images? how often are points missed or non-points detected in the two images?), the accuracy of their location estimates (are the estimates good to ±1 pixel? better? worse?), how regularly and how closely they are spaced (what’s the average distance between points?), whether you have a reasonable initial estimate for all or part of the transformation (e.g., do you know the relative scaling between the two coordinate systems?), etc. You can probably simplify things by assuming an affine or even rigid transformation, but that depends on the optics of the two camera systems.
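For the fitting step itself: once you have even a few matched point pairs, an affine transform is just a linear least-squares problem. Here's a minimal sketch in Python/NumPy; the function names are just illustrative, nothing ImageJ-specific:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map sending src points onto dst points.

    src, dst: (N, 2) arrays of corresponding (x, y) coordinates, N >= 3.
    Returns a 2x3 matrix A = [[a, b, tx], [c, d, ty]].
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Augment with a column of ones so translation is part of the fit.
    src_h = np.hstack([src, np.ones((len(src), 1))])
    # Solve src_h @ params ~= dst in the least-squares sense.
    params, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return params.T

def apply_affine(A, pts):
    """Apply a 2x3 affine matrix to an (N, 2) array of points."""
    pts = np.asarray(pts, dtype=float)
    return pts @ A[:, :2].T + A[:, 2]
```

A rigid fit (rotation and translation only, no scale or shear) takes a Procrustes/Kabsch-style solution instead, but the affine version is usually good enough as a first pass.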
One reasonable approach would be to use either initial knowledge or some rudimentary feature matching (e.g., distance ratios and angles) to try to get one or more approximate initial transformation estimates, then use a descent technique to try to improve these estimates.
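One simple way to mechanize that (closer to random sampling and scoring than to a true descent method) is sketched below: derive a similarity transform, i.e. scale plus rotation plus translation, from one pair of microscope points and one pair of photo points, then count how many transformed microscope points land near some photo point, and keep the best candidate. All the names, the 5-pixel tolerance, and the trial count here are made up for illustration; you'd tune them to your data:

```python
import numpy as np
from scipy.spatial import cKDTree

def as_complex(pt):
    """Treat an (x, y) point as the complex number x + iy."""
    return complex(float(pt[0]), float(pt[1]))

def similarity_from_pair(p1, p2, q1, q2):
    """Similarity transform (scale + rotation + translation) with p1->q1, p2->q2.

    In complex form the map is z -> a*z + b.
    """
    a = (as_complex(q2) - as_complex(q1)) / (as_complex(p2) - as_complex(p1))
    b = as_complex(q1) - a * as_complex(p1)
    return a, b

def count_inliers(a, b, scope_pts, photo_pts, tol=5.0):
    """Number of transformed microscope points within tol pixels of some photo point."""
    z = scope_pts[:, 0] + 1j * scope_pts[:, 1]
    w = a * z + b
    mapped = np.column_stack([w.real, w.imag])
    dists, _ = cKDTree(photo_pts).query(mapped)
    return int(np.sum(dists < tol))

def best_transform(scope_pts, photo_pts, n_trials=2000, tol=5.0, seed=0):
    """Randomly try pair-to-pair matches and keep the transform with the most inliers."""
    rng = np.random.default_rng(seed)
    scope_pts = np.asarray(scope_pts, dtype=float)
    photo_pts = np.asarray(photo_pts, dtype=float)
    best_score, best_ab = 0, None
    for _ in range(n_trials):
        i, j = rng.choice(len(scope_pts), size=2, replace=False)
        k, l = rng.choice(len(photo_pts), size=2, replace=False)
        a, b = similarity_from_pair(scope_pts[i], scope_pts[j],
                                    photo_pts[k], photo_pts[l])
        score = count_inliers(a, b, scope_pts, photo_pts, tol)
        if score > best_score:
            best_score, best_ab = score, (a, b)
    return best_ab, best_score
```

Once you have the best rough transform, you can collect the implied point correspondences and refine them with the least-squares affine fit above, or with a proper descent method if you need more than a similarity transform.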
That's the approach we ended up taking. Our algorithm picks the distance between two of the microscope brain centers and considers that to be "1", then calculates the distances and angles to all the other centers. Then it goes through pairs of slide photo brain centers, performing the same operation, until the two sets of distances and angles match up. Thanks for the input everyone.
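In case anyone finds this thread later, here's roughly what that looks like in code. This is a simplified Python sketch of the idea rather than our actual script; the function names and tolerances are just placeholders:

```python
import numpy as np

def pair_signature(points, i, j):
    """Distances and angles to every other center, measured relative to the base pair (i, j).

    The base pair sets the unit length ("1") and the zero-angle direction, so the
    signature is unchanged by translation, rotation, and uniform scaling.
    """
    points = np.asarray(points, dtype=float)
    base = points[j] - points[i]
    unit = np.linalg.norm(base)                  # this distance becomes "1"
    base_angle = np.arctan2(base[1], base[0])
    sig = []
    for k in range(len(points)):
        if k in (i, j):
            continue
        v = points[k] - points[i]
        dist = np.linalg.norm(v) / unit
        ang = (np.arctan2(v[1], v[0]) - base_angle) % (2 * np.pi)
        sig.append((dist, ang))
    return sig

def angle_diff(a, b):
    """Smallest absolute difference between two angles in radians."""
    d = abs(a - b) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

def contained_in(scope_sig, photo_sig, dist_tol=0.02, ang_tol=0.05):
    """True if every microscope entry has an unused close counterpart in the photo signature.

    The photo signature may be longer, since the slide photo can contain brains
    the microscope never imaged.
    """
    remaining = list(photo_sig)
    for d, a in scope_sig:
        match = next((x for x in remaining
                      if abs(x[0] - d) < dist_tol and angle_diff(x[1], a) < ang_tol),
                     None)
        if match is None:
            return False
        remaining.remove(match)
    return True
```

The outer loop fixes one pair of microscope centers and tries each ordered pair of slide photo centers until contained_in comes back True; the matched base pairs then give you the scale, rotation, and offset between the two coordinate systems.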