This tutorial summarizes the work done by Frederic Sulzman during his internship at the ESRF in the summer of 2015, entitled "Calibration for geometric distortion in multi-module pixel detectors".
The overall strategy is very similar to "CCD calibration" tutorial with some specificities due to the modular nature of the detector.
Since each module is made by a lithographic process, the error within a module is assumed to be constant. We will use the term "displacement of the module" to describe the rigid movement of the module.
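Such a rigid displacement can be modelled by two translations and one rotation per module. A minimal sketch of applying one to a set of corner coordinates (illustrative values, not from the actual detector):

```python
import numpy as np

def displace(points, dy, dx, angle):
    """Apply a rigid displacement (rotation then translation, in pixels)
    to an array of (y, x) coordinates."""
    c, s = np.cos(angle), np.sin(angle)
    rot = np.array([[c, -s],
                    [s,  c]])
    return points @ rot.T + np.array([dy, dx])

# four corners of a hypothetical module
corners = np.array([[0.0, 0.0], [0.0, 10.0], [5.0, 0.0], [5.0, 10.0]])
moved = displace(corners, dy=0.3, dx=-0.2, angle=0.001)
```

Being rigid, the transformation preserves all distances within the module; only its position and orientation change.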
This tutorial uses data from the Pilatus3 2M CdTe of the ID15 beamline at the ESRF. The beamline provided not only the internship subject but also the pair of images used to calibrate the detector.
This detector contains 48 half-modules, each bonded to a single CdTe monocrystal sensor, and is designed for high-energy X-ray detection. Due to the construction procedure, these half-modules may show a misalignment within the detector plane. While the manufacturer (Dectris) guarantees a precision within one pixel (172 µm), the misalignment of certain modules can be seen when calibrating Debye-Scherrer rings using a reference sample. The aim of this work is therefore to provide a detector description with better precision than the default one.
This work will be performed on the image of a grid, available at http://www.silx.org/pub/pyFAI/detector_calibration/Pilatus2MCdTe_ID15_grid_plus_sample_0004.cbf, and on the scattering of ceria (CeO2) at 72.1 keV, available at http://www.silx.org/pub/pyFAI/detector_calibration/Pilatus2MCdTe_ID15_CeO2_72100eV_800mm_0000.cbf
It is a good exercise to calibrate all rings of the latter image using the pyFAI-calib2 tool. A calibration close to perfection is needed to visualize the module misalignment we aim to correct.
%matplotlib notebook
#many imports which will be used all along the notebook
import time
start_time = time.time()
import os
import pyFAI
import fabio
import glob
import numpy
from math import sin, cos, sqrt
from scipy.ndimage import convolve, binary_dilation
from scipy.spatial import distance_matrix
from scipy.optimize import minimize
from matplotlib.pyplot import subplots
from pyFAI.ext.bilinear import Bilinear
from pyFAI.ext.watershed import InverseWatershed
from silx.resources import ExternalResources
print("Using pyFAI version: ", pyFAI.version)
#Nota: comment out when running from outside ESRF
#os.environ["http_proxy"] = "http://proxy.esrf.fr:3128"
downloader = ExternalResources("detector_calibration", "http://www.silx.org/pub/pyFAI/detector_calibration/")
ring_file = downloader.getfile("Pilatus2MCdTe_ID15_CeO2_72100eV_800mm_0000.cbf")
print(ring_file)
grid_file = downloader.getfile("Pilatus2MCdTe_ID15_grid_plus_sample_0004.cbf")
print(grid_file)
rings = fabio.open(ring_file).data
img = fabio.open(grid_file).data
fig,ax = subplots(1,2, figsize=(10,5))
ax[0].imshow(img.clip(0,1000), interpolation="bilinear")
ax[0].set_title("grid")
ax[1].imshow(numpy.arcsinh(rings), interpolation="bilinear")
ax[1].set_title("rings")
Three pre-processing steps are needed: build a map of the module identifier for each pixel, mask out the virtual pixels interpolated at sub-module boundaries, and mask the misbehaving (hot) pixels.
# This is the default detector as defined in pyFAI, according to the specification provided by Dectris:
pilatus = pyFAI.detector_factory("Pilatus_2m_CdTe")
print(pilatus)
mask1 = pilatus.mask
module_size = pilatus.MODULE_SIZE
module_gap = pilatus.MODULE_GAP
submodule_size = (96,60)
#1 + 2 Calculation of the module_id and the interpolated-mask:
mid = numpy.zeros(pilatus.shape, dtype=int)
mask2 = numpy.zeros(pilatus.shape, dtype=int)
idx = 1
for i in range(8):
y_start = i*(module_gap[0] + module_size[0])
y_stop = y_start + module_size[0]
for j in range(3):
x_start = j*(module_gap[1] + module_size[1])
x_stop = x_start + module_size[1]
mid[y_start:y_stop,x_start: x_start+module_size[1]//2] = idx
idx+=1
mid[y_start:y_stop,x_start+module_size[1]//2: x_stop] = idx
idx+=1
mask2[y_start+submodule_size[0]-1:y_start+submodule_size[0]+2,
x_start:x_stop] = 1
for k in range(1,8):
mask2[y_start:y_stop,
x_start+k*(submodule_size[1]+1)-1:x_start+k*(submodule_size[1]+1)+2] = 1
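As a sanity check on this layout, the overall Pilatus 2M shape can be recomputed from the module size and gaps (195×487-pixel modules, 17- and 7-pixel gaps, 8×3 modules, values matching the pyFAI definition):

```python
# recompute the full detector shape from the module layout
module_size = (195, 487)
module_gap = (17, 7)
n_rows, n_cols = 8, 3

shape = (n_rows * module_size[0] + (n_rows - 1) * module_gap[0],
         n_cols * module_size[1] + (n_cols - 1) * module_gap[1])
print(shape)  # (1679, 1475)
```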
#Extra masking
mask0 = img<0
#These pixels are misbehaving... they are the hot pixels next to the beam-stop
mask0[915:922,793:800] = 1
mask0[817:820,747:750] = 1
fig,ax = subplots(1,3, figsize=(10,4))
ax[0].imshow(mid, interpolation="bilinear")
ax[0].set_title("Module Id")
ax[1].imshow(mask2+mask1+mask0, interpolation="bilinear")
ax[1].set_title("Combined mask")
nimg = img.astype(float)
nimg[numpy.where(mask0+mask1+mask2)] = numpy.nan
ax[2].imshow(nimg)#, interpolation="bilinear")
ax[2].set_title("Nan masked image")
# The NaN-masked image now contains only valid values (and NaN elsewhere). We will apply a large
# median filter to build up a smooth image without gaps.
#
# This function is backported from a more recent version of numpy ... it exposes a windowed view
# to perform the nanmedian filter
from numpy.lib.stride_tricks import as_strided
def sliding_window_view(x, shape, subok=False, readonly=True):
"""
Creates sliding window views of the N dimensional array with the given window
shape. Window slides across each dimension of `x` and extract subsets of `x`
at any window position.
Parameters
----------
x : array_like
Array to create sliding window views of.
shape : sequence of int
The shape of the window. Must have same length as the number of input array dimensions.
subok : bool, optional
If True, then sub-classes will be passed-through, otherwise the returned
array will be forced to be a base-class array (default).
readonly : bool, optional
If set to True, the returned array will always be readonly view.
Otherwise it will return writable copies(see Notes).
Returns
-------
view : ndarray
Sliding window views (or copies) of `x`. view.shape = x.shape - shape + 1
See also
--------
as_strided: Create a view into the array with the given shape and strides.
broadcast_to: broadcast an array to a given shape.
Notes
-----
``sliding_window_view`` create sliding window views of the N dimensions array
with the given window shape and its implementation based on ``as_strided``.
Please note that if readonly set to True, views are returned, not copies
of array. In this case, write operations could be unpredictable, so the returned
views are readonly. Bear in mind that returned copies (readonly=False) will
take more memory than the original array, due to overlapping windows.
For some cases there may be more efficient approaches to calculate transformations
across multi-dimensional arrays, for instance `scipy.signal.fftconvolve`, where combining
the iterating step with the calculation itself while storing partial results can result
in significant speedups.
Examples
--------
>>> i, j = np.ogrid[:3,:4]
>>> x = 10*i + j
>>> shape = (2,2)
>>> np.lib.stride_tricks.sliding_window_view(x, shape)
array([[[[ 0, 1],
[10, 11]],
[[ 1, 2],
[11, 12]],
[[ 2, 3],
[12, 13]]],
[[[10, 11],
[20, 21]],
[[11, 12],
[21, 22]],
[[12, 13],
[22, 23]]]])
"""
np = numpy
# first convert input to array, possibly keeping subclass
x = np.array(x, copy=False, subok=subok)
try:
shape = np.array(shape, int)
except Exception:
raise TypeError('`shape` must be a sequence of integers')
else:
if shape.ndim > 1:
raise ValueError('`shape` must be a one-dimensional sequence of integers')
if len(x.shape) != len(shape):
raise ValueError("`shape` length doesn't match with input array dimensions")
if np.any(shape <= 0):
raise ValueError('`shape` cannot contain non-positive value')
o = np.array(x.shape) - shape + 1 # output shape
if np.any(o <= 0):
raise ValueError('window shape cannot be larger than input array shape')
if type(readonly) != bool:
raise TypeError('readonly must be a boolean')
strides = x.strides
view_strides = strides
view_shape = np.concatenate((o, shape), axis=0)
view_strides = np.concatenate((view_strides, strides), axis=0)
view = as_strided(x, view_shape, view_strides, subok=subok, writeable=not readonly)
if not readonly:
return view.copy()
else:
return view
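Recent numpy releases (1.20 and later) ship this function directly in `numpy.lib.stride_tricks`; a quick sanity check of the windowed nanmedian idea on a toy array, using the stock version:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

a = np.arange(12, dtype=float).reshape(3, 4)
a[1, 2] = np.nan                               # one masked pixel
padded = np.pad(a, ((1, 1), (1, 1)), mode="edge")
windows = sliding_window_view(padded, (3, 3))  # shape (3, 4, 3, 3)
filtered = np.nanmedian(windows, axis=(-2, -1))  # NaN ignored per window
print(filtered.shape)  # (3, 4), same as a
```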
%%time
#Calculate a background image using a large median filter ... takes a while
shape = (19,11)
print(nimg.shape)
padded = numpy.pad(nimg, tuple((i//2,) for i in shape), mode="edge")
print(padded.shape)
background = numpy.nanmedian(sliding_window_view(padded,shape), axis = (-2,-1))
print(background.shape)
fig,ax = subplots()
ax.imshow(background)
ax.set_title("Background image")
fig,ax = subplots(1,2, figsize=(9,5))
normalized = (nimg/background)
low = numpy.nanmin(normalized)
high = numpy.nanmax(normalized)
print(low, high)
normalized[numpy.isnan(normalized)] = 0
normalized /= high
ax[0].imshow(normalized)
ax[0].set_title("Normalized image")
ax[1].hist(normalized.ravel(), 100, range=(0,1))
ax[1].set_title("Histogram of intensities in normalized image")
For a precise measurement of the peak positions, one trick is to convolve the image with a kernel that looks like a hole of the grid.
#print the profile of the normalized image: the center is difficult to measure due to the small size of the hole.
fig,ax = subplots(2)
ax[0].plot(normalized[:,545])
ax[1].plot(normalized[536,:])
#Definition of the convolution kernel
ksize = 5
y,x = numpy.ogrid[-(ksize-1)//2:ksize//2+1,-(ksize-1)//2:ksize//2+1]
d = numpy.sqrt(y*y+x*x)
#Fade out curve definition
fadeout = lambda x: 1/(1+numpy.exp(5*(x-2.5)))
kernel = fadeout(d)
mini=kernel.sum()
print(mini)
fig,ax = subplots(1,3)
ax[0].imshow(d)
ax[0].set_title("Distance array")
ax[1].plot(numpy.linspace(0,5,100),fadeout(numpy.linspace(0,5,100)))
ax[1].set_title("fade-out curve")
ax[2].imshow(kernel)
ax[2].set_title("Convolution kernel")
my_smooth = convolve(normalized, kernel, mode="constant", cval=0)/mini
print(my_smooth.shape)
fig,ax = subplots(1,2)
ax[0].imshow(normalized.clip(0,1))
ax[0].set_ylim(1050,1100)
ax[0].set_xlim(300,350)
ax[1].imshow(my_smooth.clip(0,1))
ax[1].set_ylim(1050,1100)
ax[1].set_xlim(300,350)
numpy.where(my_smooth == my_smooth.max())
#mask out all pixels too close to any masked position
all_masks = numpy.logical_or(numpy.logical_or(mask0,mask1),mask2)
print(all_masks.sum())
big_mask = binary_dilation(all_masks, iterations=ksize//2+1+1)
print(big_mask.sum())
smooth2 = my_smooth.copy()
smooth2[big_mask] = 0
fig,ax = subplots()
ax.imshow(smooth2)
#Display the profile of the smoothed image: the center is easy to measure thanks to the smoothness of the signal
fig,ax = subplots(2)
ax[0].plot(my_smooth[:,545])
ax[1].plot(my_smooth[536,:])
We use the watershed module from pyFAI to retrieve all peak positions. Those regions are then sieved out according to their size, their position (not on a masked area) and their intensity:
iw = InverseWatershed(my_smooth)
iw.init()
iw.merge_singleton()
all_regions = set(iw.regions.values())
regions = [i for i in all_regions if i.size>mini]
print("Number of region segmented: %s"%len(all_regions))
print("Number of large enough regions : %s"%len(regions))
#Remove peaks on masked region
sieved_region = [i for i in regions if not big_mask[(i.index//nimg.shape[-1], i.index%nimg.shape[-1])]]
print("Number of peaks not on masked area : %s"%len(sieved_region))
# Histogram of peak height:
s = numpy.array([i.maxi for i in sieved_region])
fig, ax = subplots()
ax.hist(s, 100)
#sieve-out for peak intensity
int_mini = 0.1
peaks = [(i.index//nimg.shape[-1], i.index%nimg.shape[-1]) for i in sieved_region if (i.maxi)>int_mini]
print("Number of remaining peaks with I>%s: %s"%(int_mini, len(peaks)))
peaks_raw = numpy.array(peaks)
# Finally the peak positions are interpolated using a second-order Taylor expansion
# in the vicinity of the maximum value of the signal:
#Create bilinear interpolator
bl = Bilinear(my_smooth)
#Overlay raw peak coordinate and refined peak positions
ref_peaks = [bl.local_maxi(p) for p in peaks]
fig, ax = subplots()
ax.imshow(img.clip(0,1000), interpolation="nearest")
peaks_ref = numpy.array(ref_peaks)
ax.plot(peaks_raw[:,1], peaks_raw[:, 0], ".r")
ax.plot(peaks_ref[:,1],peaks_ref[:, 0], ".b")
ax.set_title("Extracted peak position (red: raw, blue: refined)")
print("Refined peak coordinate:")
print(ref_peaks[:10])
At this stage we have about 2000 peaks (with sub-pixel precision) which are visually distributed over all modules. Some modules have their peaks located along sub-module boundaries which are masked out, hence they have fewer control points for the calculation. Let's assign each peak to a module identifier. This allows us to print out the number of peaks per module:
dt = numpy.dtype([('y', numpy.float64),
('x', numpy.float64),
('i', numpy.int64),
])
yxi = numpy.array([i+(mid[round(i[0]),round(i[1])],)
for i in ref_peaks], dtype=dt)
print("Number of keypoint per module:")
for i in range(1,mid.max()+1):
print("Module id:",i, "cp:", (yxi[:]["i"] == i).sum())
The calibration is performed using a regular grid; the idea is to assign to each peak of coordinates (x, y) the integer pair (X, Y) which corresponds to the grid coordinate system.
The first step is to measure the grid pitch, which corresponds to the distance (in pixels) from one peak to the next. This is easily obtained from a pair-wise distribution function.
# pairwise distance calculation using scipy.spatial.distance_matrix
dist = distance_matrix(peaks_ref, peaks_ref)
fig, ax = subplots()
ax.hist(dist.ravel(), 100, range=(0,100))
ax.set_title("Pair-wise distribution function")
The histogram of the pair-distribution function has a first peak at 0 and a second peak between 29 and 30. Let's start the fit with this value.
Two other parameters correspond to the offset, in pixels, of the grid index (X, Y) = (0, 0). The easiest is to measure the smallest x and y for the first module.
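The pitch can also be estimated numerically from the pair distances. A small sketch on synthetic grid points (assumed pitch of 29 pixels with a hypothetical sub-pixel jitter):

```python
import numpy as np
from scipy.spatial import distance_matrix

# synthetic 5x5 grid, pitch 29 pixels, with small random jitter
rng = np.random.default_rng(0)
gy, gx = np.mgrid[0:5, 0:5]
pts = np.stack([gy.ravel() * 29.0, gx.ravel() * 29.0], axis=1)
pts += rng.normal(scale=0.1, size=pts.shape)

d = distance_matrix(pts, pts).ravel()
# keep only nearest-neighbour pairs, well below the diagonal spacing (~41 px)
neighbour = d[(d > 20) & (d < 35)]
pitch = np.median(neighbour)
print(round(pitch))  # ~29
```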
#from pair-wise distribution histogram
step = 29
#work with the first module and fit the peak positions
first = yxi[yxi[:]["i"] == 1]
y_min = first[:]["y"].min()
x_min = first[:]["x"].min()
print("offset for the first peak: ", x_min, y_min)
The grid looks very well aligned with the axes, which makes this step easier, but nothing guarantees it is perfect, so the rotation of the grid has to be measured as well. The default rotation is zero and will be fitted later on.
Once the indices (X, Y) are determined for each peak, one can fit the parameters to properly align the grid with the first module. Those 4 parameters are: step size, x_min, y_min and angle.
#Assign each peak to an index
dl = numpy.dtype([('y', numpy.float64),
('x', numpy.float64),
('i', numpy.int64),
('Y', numpy.int64),
('X', numpy.int64),
])
indexed1 = numpy.zeros(len(first), dtype=dl)
for i,v in enumerate(first):
Y = int(round((v["y"]-y_min)/step))
X = int(round((v["x"]-x_min)/step))
indexed1[i]["y"] = v["y"]
indexed1[i]["x"] = v["x"]
indexed1[i]["i"] = v["i"]
indexed1[i]["Y"] = Y
indexed1[i]["X"] = X
print("peak id: %s %20s Y:%d (Δ=%.3f) X:%s (Δ=%.3f)"%
(i,v, Y, (v["y"]-Y*step-y_min)/step, X, (v["x"]-X*step-x_min)/step))
The error in positioning each of the pixels is less than 0.1 pixel, which is already excellent and will allow a straightforward fit.
The cost function for the first module is calculated as the sum of squared distances in pixel space. It uses 4 parameters: step size, x_min, y_min and angle.
guess1 = [step, y_min, x_min, 0]
def cost1(param):
"""contains: step, y_min, x_min, angle for the first module
returns the sum of distance squared in pixel space
"""
step = param[0]
y_min = param[1]
x_min = param[2]
angle = param[3]
XY = numpy.vstack((indexed1["X"], indexed1["Y"]))
rot = [[cos(angle),-sin(angle)],
[sin(angle), cos(angle)]]
xy_min = [[x_min], [y_min]]
xy_guess = numpy.dot(rot, step*XY+xy_min)
delta = xy_guess - numpy.vstack((indexed1["x"], indexed1["y"]))
return (delta*delta).sum()
print("Before optimization", guess1, "cost=", cost1(guess1))
res1 = minimize(cost1, guess1, method = "slsqp")
print(res1)
print("After optimization", res1.x, "cost=", cost1(res1.x))
print("Average displacement (pixels): ",sqrt(cost1(res1.x)/len(indexed1)))
At this step, the grid is perfectly aligned with the first module. This module is used as the reference and all others are aligned to it, using this first fit:
#retrieve the result of the first module fit:
step, y_min, x_min, angle = res1.x
indexed = numpy.zeros(yxi.shape, dtype=dl)
rot = [[cos(angle),-sin(angle)],
[sin(angle), cos(angle)]]
irot = [[cos(angle), sin(angle)],
[-sin(angle), cos(angle)]]
print("cost1: ",cost1([step, y_min, x_min, angle]), "for:", step, y_min, x_min, angle)
xy_min = numpy.array([[x_min], [y_min]])
xy = numpy.vstack((yxi["x"], yxi["y"]))
indexed["y"] = yxi["y"]
indexed["x"] = yxi["x"]
indexed["i"] = yxi["i"]
XY_app = (numpy.dot(irot, xy)-xy_min) / step
XY_int = numpy.round((XY_app)).astype("int")
indexed["X"] = XY_int[0]
indexed["Y"] = XY_int[1]
xy_guess = numpy.dot(rot, step * XY_int + xy_min)
thres = 1.2
delta = abs(xy_guess - xy)
print((delta>thres).sum())
suspicious = indexed[numpy.where(delta>thres)[1]]
print(suspicious)
fig,ax = subplots()
ax.imshow(img.clip(0,1000))
ax.plot(indexed["x"], indexed["y"],".g")
ax.plot(suspicious["x"], suspicious["y"],".r")
Only 6 peaks have an initial displacement of more than 1.2 pixels, all located in modules 40 and 46. The visual inspection confirms their localization is valid.
There are 48 (half-)modules, each of which has two translations and one rotation. In addition to the step size, this represents 145 degrees of freedom for the fit. The first module is used to align the grid; all other modules are then aligned to this grid.
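In the fit, `param[0]` is the step size, followed by three values (y_min, x_min, angle) per module, giving 48×3 + 1 = 145 parameters. A small helper (hypothetical name, dummy values) illustrating that indexing convention:

```python
def module_params(param, module):
    """Return (y_min, x_min, angle) for a 1-based module index,
    given param = [step, y1, x1, a1, y2, x2, a2, ...]."""
    base = (module - 1) * 3
    return param[base + 1], param[base + 2], param[base + 3]

# dummy vector: step size followed by 1..144 as placeholder parameters
param = [29.0] + [float(i) for i in range(1, 145)]
assert len(param) == 145
assert module_params(param, 1) == (1.0, 2.0, 3.0)
assert module_params(param, 48) == (142.0, 143.0, 144.0)
```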
def submodule_cost(param, module=1):
"""contains: step, y_min_1, x_min_1, angle_1, y_min_2, x_min_2, angle_2, ...
returns the sum of distance squared in pixel space
"""
step = param[0]
y_min1 = param[1]
x_min1 = param[2]
angle1 = param[3]
mask = indexed["i"] == module
substack = indexed[mask]
XY = numpy.vstack((substack["X"], substack["Y"]))
rot1 = [[cos(angle1), -sin(angle1)],
[sin(angle1), cos(angle1)]]
xy_min1 = numpy.array([[x_min1], [y_min1]])
xy_guess1 = numpy.dot(rot1, step * XY + xy_min1)
#This is guessed spot position for module #1
if module == 1:
"Not much to do for module 1"
delta = xy_guess1 - numpy.vstack((substack["x"], substack["y"]))
else:
"perform the correction for given module"
y_min = param[(module-1)*3+1]
x_min = param[(module-1)*3+2]
angle = param[(module-1)*3+3]
rot = numpy.array([[cos(angle),-sin(angle)],
[sin(angle), cos(angle)]])
xy_min = numpy.array([[x_min], [y_min]])
xy_guess = numpy.dot(rot, xy_guess1+xy_min)
delta = xy_guess - numpy.vstack((substack["x"], substack["y"]))
return (delta*delta).sum()
guess145 = numpy.zeros(48*3+1)
guess145[:4] = res1.x
for i in range(1, 49):
print("Cost for module #",i, submodule_cost(guess145, i))
One sees again that modules 40 and 46 have large errors; module 37 as well.
The total cost function is hence the sum of the cost functions of all modules:
def total_cost(param):
"""contains: step, y_min_1, x_min_1, angle_1, ...
returns the sum of distance squared in pixel space
"""
return sum(submodule_cost(param, module=i) for i in range(1,49))
total_cost(guess145)
%%time
print("Before optimization", guess145[:10], "cost=", total_cost(guess145))
res_all = minimize(total_cost, guess145, method = "slsqp")
print(res_all)
print("After optimization", res_all.x[:10], "cost=", total_cost(res_all.x))
for i in range(1,49):
print("Module id: %d cost: %.3f Δx: %.3f, Δy: %.3f rot: %.3f°"%
(i, submodule_cost(res_all.x, i), res_all.x[-2+i*3], res_all.x[-1+i*3], numpy.rad2deg(res_all.x[i*3])))
Analysis: modules 40, 46 and 48 show a large displacement, but the fitting procedure reduced the residual cost to the same value as for the other modules.
The pixel positions can be obtained from the standard Pilatus detector. Each module is then displaced according to the fitted values, except the first one, which is left where it is.
pixel_coord = pyFAI.detector_factory("Pilatus2MCdTe").get_pixel_corners()
pixel_coord_raw = pixel_coord.copy()
for i in range(2, 49):
# Extract the pixel corners for one module
module_idx = numpy.where(mid == i)
one_module = pixel_coord_raw[module_idx]
#retrieve the fitted values
dy = res_all.x[1+(i-1)*3]
dx = res_all.x[2+(i-1)*3]
angle = res_all.x[3+(i-1)*3]
z = one_module[...,0]
y = one_module[...,1]
x = one_module[...,2]
#apply the correction the other way around
irot = [[cos(angle), sin(angle)],
[-sin(angle), cos(angle)]]
trans = [[dx * pilatus.pixel2],
[dy * pilatus.pixel1]]
xy_guess = numpy.vstack((x.ravel(), y.ravel()))
xy_cor = numpy.dot(irot, xy_guess) - trans
xy_cor.shape = ((2,)+x.shape)
one_module[...,1] = xy_cor[1, ...] #y
one_module[...,2] = xy_cor[0, ...] #x
#Update the array
pixel_coord_raw[module_idx] = one_module
pilatus.set_pixel_corners(pixel_coord_raw)
pilatus.mask = all_masks
pilatus.save("Pilatus_ID15_raw.h5")
displ = numpy.sqrt(((pixel_coord - pixel_coord_raw)**2).sum(axis=-1))
displ /= pilatus.pixel1 #convert in pixel units
fig, ax = subplots()
ax.hist(displ.ravel(), 100)
ax.set_title("Displacement of pixels versus the reference representation")
ax.set_xlabel("Error in pixel size (172µm)")
unmasked = numpy.logical_not(all_masks)
misaligned = numpy.vstack((pixel_coord_raw[unmasked][..., 2].ravel(), #x
pixel_coord_raw[unmasked][..., 1].ravel())) #y
reference = numpy.vstack((pixel_coord[unmasked][..., 2].ravel(), #x
pixel_coord[unmasked][..., 1].ravel())) #y
def global_displacement(param):
"""correct all pixel position with dy. dx, angle"""
dy, dx, angle = param
rot = [[cos(angle), -sin(angle)],
[sin(angle), cos(angle)]]
trans = [[dx],
[dy]]
xy_corr = numpy.dot(rot, misaligned) + trans
delta = xy_corr - reference
return (delta*delta).sum()
global_displacement([0,0,0])
%%time
guess_global = [0,0,0]
print("Before optimization", guess_global, "cost=", global_displacement(guess_global))
res_global = minimize(global_displacement, guess_global, method = "slsqp")
print(res_global)
print("After optimization", res_global.x, "cost=", global_displacement(res_global.x))
dy, dx, angle = res_global.x
rot = [[cos(angle), -sin(angle)],
[sin(angle), cos(angle)]]
trans = [[dx],
[dy]]
raw = numpy.vstack((pixel_coord_raw[..., 2].ravel(), #x
pixel_coord_raw[..., 1].ravel())) #y
aligned = numpy.dot(rot, raw) + trans
pixel_coord_ref = pixel_coord.copy()
pixel_coord_ref[...,1] = aligned[1,:].reshape(pixel_coord_ref.shape[:-1])
pixel_coord_ref[...,2] = aligned[0,:].reshape(pixel_coord_ref.shape[:-1])
displ = numpy.sqrt(((pixel_coord-pixel_coord_ref)**2).sum(axis=-1))
displ /= pilatus.pixel1 #convert in pixel units
fig, ax = subplots()
ax.hist(displ.ravel(), 100)
ax.set_title("Displacement of pixels versus the reference representation")
ax.set_xlabel("Pixel size (172µm)")
pilatus.set_pixel_corners(pixel_coord_ref)
pilatus.mask = all_masks
pilatus.save("Pilatus_ID15_ref.h5")
This tutorial presents the way to calibrate a module-based detector using the Pilatus 2M CdTe from ESRF-ID15. The HDF5 file generated is directly usable by any part of pyFAI; the reader is invited to calibrate the ring image with the default detector definition and with this optimized definition, and to check that the residual error is reduced by almost a factor of two.
Coming back to the precision of the pixel localization: not all pixels are within the specification provided by Dectris, which claims the misalignment of the modules is within one pixel.
print("Total execution time: ", time.time()-start_time)