camera_stage_mapping.correlation_image_tracking

Utility functions to track motion of a microscope using FFT-based correlation.

Cross-correlation is a reasonable way to determine where an object is in an image, and it can also be used to track 2D motion. locate_feature_in_image uses cross-correlation followed by a background-subtracted centre of mass to find the location of a template with respect to an image.

This code is faster than the FFT-based code if you have a good idea of where the target is and the target is much smaller than the image, i.e. if the search area is small. If you are correlating whole images against each other, the FFT method is likely faster.

(c) Richard Bowman 2020, released under GNU GPL v3. No warranty, express or implied, is given with respect to this code.

Module Contents

Functions

central_half(image)

Return the central 50% (in X and Y) of an image

datum_pixel(image)

Get the datum pixel of an image - if no property is present, assume the central pixel.

locate_feature_in_image(image, feature[, margin, ...])

Find the given feature (small image) and return the position of its datum (or centre) in the image's pixels.

camera_stage_mapping.correlation_image_tracking.central_half(image)

Return the central 50% (in X and Y) of an image
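The cropping behaviour described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the library's implementation; it assumes a plain numpy array and crops only the first two axes, so any colour channels are preserved.

```python
import numpy as np

def central_half(image):
    """Return the central 50% (in X and Y) of an image.

    Sketch: keep the middle half of the first two axes and leave
    any trailing axes (e.g. colour channels) untouched.
    """
    h, w = image.shape[:2]
    return image[h // 4 : h // 4 + h // 2,
                 w // 4 : w // 4 + w // 2, ...]
```

For example, an 8x12 image yields a 4x6 crop, and a 10x10x3 RGB image yields a 5x5x3 crop.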

camera_stage_mapping.correlation_image_tracking.datum_pixel(image)

Get the datum pixel of an image - if no property is present, assume the central pixel.
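The fallback logic can be illustrated as follows. Note the `attrs` dictionary is an assumption about how an ImageWithLocation might carry its metadata; the key point is simply that a plain array with no metadata falls back to its central pixel.

```python
import numpy as np

def datum_pixel(image):
    """Get the datum pixel of an image.

    Sketch: look for datum metadata (the `attrs` dict is an assumed
    storage convention); if none is present, assume the central pixel.
    """
    datum = getattr(image, "attrs", {}).get("datum_pixel", None)
    if datum is None:
        # No metadata: the datum is the centre of the image.
        datum = (np.array(image.shape[:2]) - 1) / 2.0
    return np.asarray(datum, dtype=float)
```

A bare 5x7 numpy array, for instance, has no metadata, so its datum is the central pixel (2.0, 3.0).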

camera_stage_mapping.correlation_image_tracking.locate_feature_in_image(image, feature, margin=0, restrict=False, relative_to='top left')

Find the given feature (small image) and return the position of its datum (or centre) in the image’s pixels.

image : numpy.array

The image in which to look.

feature : numpy.array

The feature to look for. Ideally should be an ImageWithLocation.

margin : int (optional)

Make sure the feature image is at least this much smaller than the big image. NB this takes the images' datum points into account - if the datum points are superimposed, there must be at least margin pixels on each side of the feature image.

restrict : bool (optional, default False)

If set to True, restrict the search area to a square of (margin * 2 + 1) pixels centred on the pixel that most closely overlaps the datum points of the two images.

relative_to : str (optional, default “top left”)

We return the position of the centre (or datum pixel, if that metadata is present) of the feature, relative either to the top left (i.e. 0,0) pixel of the image or to its central pixel - for the latter, set relative_to to “centre” (or “center” if you must).

The image must be larger than feature by a margin big enough to produce a meaningful search area. We use the OpenCV matchTemplate method to find the feature. The returned position is the position, relative to the corner of the first image, of the “datum pixel” of the feature image. If no datum pixel is specified, we assume it’s the centre of the image. The output of this function can be passed into the pixel_to_location() method of the larger image to yield the position in the sample of the feature you’re looking for.
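The search described above can be sketched end to end. This is a simplified stand-in: it swaps OpenCV's matchTemplate for a brute-force sum-of-squared-differences search, assumes plain numpy arrays with no datum metadata (so the feature's datum is its centre), and uses (row, column) coordinate order, which is an assumption about the library's convention.

```python
import numpy as np

def locate_feature_in_image(image, feature, relative_to="top left"):
    """Sketch of the documented behaviour: find where the feature best
    matches the image, then return the position of the feature's datum
    (here assumed to be its centre) in the image's pixels."""
    ih, iw = image.shape[:2]
    fh, fw = feature.shape[:2]
    # Score every possible top-left placement of the feature
    # (brute-force stand-in for cv2.matchTemplate).
    best, best_pos = np.inf, (0, 0)
    for y in range(ih - fh + 1):
        for x in range(iw - fw + 1):
            score = np.sum((image[y:y + fh, x:x + fw] - feature) ** 2)
            if score < best:
                best, best_pos = score, (y, x)
    # Convert the best top-left corner to the feature's central pixel.
    datum = np.array(best_pos) + (np.array([fh, fw]) - 1) / 2.0
    if relative_to in ("centre", "center"):
        datum -= (np.array([ih, iw]) - 1) / 2.0
    return datum
```

For a 3x3 bright patch placed at rows 5-7, columns 10-12 of a dark 20x20 image, this returns the patch centre, (6, 11). As the full description notes, in the real library that result could then be passed to the larger image's pixel_to_location() method.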