Image Analysis/Processing

Diffraction pattern analysis is essentially specialized image processing. This tutorial shows some of the image processing and analysis techniques that are part of scikit-ued.

Note

Use scikit-ued in combination with npstreams to process electron diffraction data in parallel.
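
For example, a summary statistic over a large set of patterns can be computed without ever loading the whole stack in memory. A minimal sketch of how the two packages combine (npstreams.average and the filenames below are assumptions, not part of this tutorial):

from glob import iglob
from npstreams import average     # streaming reduction over an iterable of arrays
from skued import diffread

patterns = map(diffread, iglob('data_*.tif'))   # hypothetical filenames
avg = average(patterns)                         # consumes the stream lazily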

Reading Images

Diffraction patterns can come in a variety of exotic file formats. Scikit-ued has built-in support for the following file formats:

  • Gatan’s closed source DM3 and DM4 (*.dm3, *.dm4);
  • Merlin Image Binary (*.mib);
  • TIFF images (*.tif, *.tiff);
  • All other file formats supported by scikit-image.

The skued.diffread() function will transparently distinguish between those formats and dispatch to the appropriate reader.
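
For example (the filenames below are placeholders):

from skued import diffread

im1 = diffread('pattern.dm3')    # Gatan Digital Micrograph file
im2 = diffread('pattern.tif')    # TIFF file
# Both are returned as 2D NumPy arrays, regardless of the underlying format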

Diffraction pattern alignment

Diffraction patterns can drift over a period of a few minutes, and for reliable data synthesis it is important to align patterns to a reference.

The procedure of detecting, or registering, the translation between two similar images is usually done by measuring the cross-correlation between them. When the images are very similar, this procedure works well; see scikit-image’s skimage.feature.register_translation(), for example.
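
Here is a hedged sketch using scikit-image directly on a synthetic image pair (note that in newer scikit-image versions, register_translation() has been replaced by skimage.registration.phase_cross_correlation()):

import numpy as np
from skimage.feature import register_translation

# Synthetic example: a reference image and a copy shifted by a few pixels
ref = np.random.random((256, 256))
im = np.roll(np.roll(ref, 5, axis = 0), -3, axis = 1)

shift, error, phasediff = register_translation(ref, im, upsample_factor = 10)
# `shift` is the (row, col) translation detected between the two images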

However, diffraction patterns all have a fixed feature: the position of the beam-block. Therefore, some pixels in each diffraction pattern must be ignored in the computation of the cross-correlation.

Setting the invalid pixels to 0 will not work, as those will correlate with the invalid pixels from the reference. One must instead use the masked normalized cross-correlation, available through scikit-ued’s mnxc2().

All of this is taken care of in scikit-ued’s diff_register() function. Let’s look at some polycrystalline chromium:

(Figure: ../_images/image-1.png)

From the difference pattern, we can see that the ‘Data’ pattern is shifted from ‘Reference’ quite a bit. To determine the exact shift, we need to use a mask that obscures the beam-block and main beam:

from skued import diff_register, shift_image, diffread
import numpy as np

ref = diffread('Cr_1.tif')
im = diffread('Cr_2.tif')

mask = np.zeros_like(ref, dtype = bool)      # boolean pixel mask
mask[0:1250, 950:1250] = True                # True on invalid pixels (beam-block and main beam)

shift = diff_register(im, reference = ref, mask = mask)
im = shift_image(im, shift)
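
Continuing the snippet above, the difference between the reference and the aligned pattern can be inspected directly (a sketch; the plotting parameters are arbitrary):

import matplotlib.pyplot as plt

# Cast to float to avoid wrap-around when subtracting unsigned integer images
diff = ref.astype(float) - im.astype(float)

plt.imshow(diff, cmap = 'RdBu_r')
plt.colorbar()
plt.show()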

Let’s look at the difference:

(Figure: ../_images/image-2.png)

Image processing involving symmetry

Rotational symmetry

Diffraction patterns exhibit rotational symmetry based on the crystal structure. We can take advantage of this symmetry to correct images that contain artifacts or defects. A useful routine is nfold(), which averages portions of a diffraction pattern with itself based on rotational symmetry.

(Figure: ../_images/image-3.png)

To use nfold(), all you need to know is the center of the diffraction pattern:

from skued import nfold, diffread

im = diffread('graphite.tif')
center = (1024, 1024)                       # (row, col) center of the pattern; placeholder value
av = nfold(im, mod = 6, center = center)    # mask is optional
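
If part of the pattern is obstructed, for example by a beam-block, a pixel mask can be passed so that invalid pixels do not contribute to the average. A hedged continuation of the snippet above (the mask coordinates are placeholders):

import numpy as np

mask = np.zeros_like(im, dtype = bool)
mask[0:1250, 950:1250] = True               # True on invalid pixels (e.g. beam-block)

av = nfold(im, mod = 6, center = center, mask = mask)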

Pixel Masks

Image data can be rejected on a per-pixel basis by using pixel masks. These masks are represented by boolean arrays that evaluate to True on invalid pixels.

scikit-ued offers some functions related to creation and manipulation of pixel masks.
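
Because pixel masks are plain boolean arrays, they can also be built and combined with ordinary NumPy operations. A small sketch (the detector size and pixel coordinates are made up):

import numpy as np

beamblock = np.zeros((2048, 2048), dtype = bool)
beamblock[0:1250, 950:1250] = True     # beam-block region is invalid

hot_pixels = np.zeros_like(beamblock)
hot_pixels[137, 1301] = True           # a known bad pixel

combined = np.logical_or(beamblock, hot_pixels)   # invalid if invalid in either mask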

Creation of a pixel mask

A pixel mask can be created from a set of images sharing the same properties. For example, diffraction patterns before photoexcitation (i.e. dark runs) form a set of images that should be identical.

Let’s imagine a set of such images with filenames dark_run_*.tif. We can create a pixel mask with mask_from_collection():

from glob import iglob
from skued import mask_from_collection, diffread

dark_runs = map(diffread, iglob('dark_run_*.tif'))    # Can be a huge stack of images
mask = mask_from_collection(dark_runs)

In the above example, pixel values outside of the [0, 30000] range will be marked as invalid (default behaviour). Moreover, the per-pixel standard deviation over the image set is computed; pixels that fluctuate too much are also rejected.
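
Conceptually, the standard-deviation criterion amounts to something like the following pure-NumPy sketch (for illustration only, not the library’s implementation; the cutoff is arbitrary):

import numpy as np

# `stack` stands in for N nominally-identical dark images, shape (N, rows, cols)
stack = np.random.poisson(lam = 100, size = (10, 256, 256)).astype(float)

per_pixel_std = np.std(stack, axis = 0)
cutoff = 3 * np.median(per_pixel_std)

mask = per_pixel_std > cutoff          # True on pixels that fluctuate too much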

Note that since mask_from_collection() uses npstreams under the hood, the collection used to compute the mask can be huge.

Image analysis on polycrystalline diffraction patterns

Center-finding

Polycrystalline diffraction patterns display concentric rings, and finding the center of those rings is an important first step for further analysis such as angular averaging. Let’s load a test image:

(Figure: ../_images/image-4.png)

This is a noisy diffraction pattern of polycrystalline vanadium dioxide. Finding the center of such a symmetric pattern can be done with the powder_center() routine:

from skued import powder_center
import matplotlib.pyplot as plt

# `im` is the diffraction pattern shown above; `mask` is a boolean pixel mask (True on invalid pixels)
ic, jc = powder_center(im, mask = mask)

# Plotting the center as a black disk
import numpy as np
ii, jj = np.meshgrid(np.arange(im.shape[0]), np.arange(im.shape[1]),
                     indexing = 'ij')
rr = np.sqrt((ii - ic)**2 + (jj - jc)**2)
im[rr < 100] = 0

plt.imshow(im, vmax = 200)
plt.show()

(Figure: ../_images/image-5.png)

Angular average

The azimuthal (angular) average reduces a two-dimensional diffraction pattern to a one-dimensional curve of intensity versus radius. First, we create a test image:

import numpy as np
import matplotlib.pyplot as plt
from skued import gaussian

image = np.zeros( (256, 256) )
xc, yc = image.shape[0]/2, image.shape[1]/2     # center

extent = np.arange(0, image.shape[0])
xx, yy = np.meshgrid(extent, extent)
rr = np.sqrt((xx - xc)**2 + (yy-yc)**2)
image += gaussian([xx, yy], center = [xc, yc], fwhm = 200)
image[np.logical_and(rr < 40, rr > 38)] = 1
image[np.logical_and(rr < 100, rr > 98)] = 0.5
image /= image.max()    # Normalize max to 1
image += np.random.random(size = image.shape)

plt.imshow(image)
plt.show()

(Figure: ../_images/image-6.png)

… and we can easily compute an angular average:

from skued import azimuthal_average

radius, intensity = azimuthal_average(image, (xc, yc))

plt.plot(radius, intensity)
plt.show()

(Figure: ../_images/image-7.png)
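
On experimental data, the center found with powder_center() and a pixel mask can be passed along to azimuthal_average(). A hedged sketch, reusing the vanadium dioxide pattern and mask from the center-finding example (assuming azimuthal_average() accepts the same True-on-invalid mask convention):

from skued import powder_center, azimuthal_average

ic, jc = powder_center(im, mask = mask)
radius, intensity = azimuthal_average(im, (ic, jc), mask = mask)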

Bonus: Removing Hot Spots

An interesting use-case of baseline-removal (described in Baseline-determination) is the removal of hot spots from images.

Consider the following diffraction pattern:

(Figure: ../_images/image-8.png)

We can consider the image without hot spots as the baseline of the image with hot spots:

from skued import diffread, baseline_dwt

im = diffread('hotspots.tif')
denoised = baseline_dwt(im, max_iter = 250, level = 1, wavelet = 'sym2', axis = (0, 1))

The result is plotted below:

(Figure: ../_images/image-9.png)

Try different combinations of wavelets, levels, and numbers of iterations (max_iter).
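
For instance, a small parameter sweep can help identify a good setting (a sketch only; the parameter values are arbitrary):

import itertools
from skued import diffread, baseline_dwt

im = diffread('hotspots.tif')

results = {}
for wavelet, level in itertools.product(['sym2', 'sym4', 'db2'], [1, 2]):
    results[(wavelet, level)] = baseline_dwt(im, max_iter = 100, level = level,
                                             wavelet = wavelet, axis = (0, 1))
# Inspect each entry of `results` and keep the combination that best preserves diffraction features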

Notice that the baseline-removal function used for images is baseline_dwt(), which works on 2D arrays; baseline_dt() only works on 1D arrays at this time.
