Integration with Python¶
This cookbook explains how to perform azimuthal integration from the Python interpreter. It is divided into two parts: the first uses plain Python, while the second uses some advanced features of the Jupyter notebook.
We will re-use the same files as in the other tutorials.
Performing azimuthal integration with pyFAI of a bunch of images¶
To be able to perform the azimuthal integration of some images, one needs:
- The diffraction images themselves; in this example they are stored as TIFF files
- The geometry of the experimental setup, as obtained from the calibration and stored as a PONI file
- Optional correction files such as flat-field images, dark-current images or a detector distortion file (spline file)
Image file: http://www.silx.org/pub/pyFAI/cookbook/calibration/LaB6_29.4keV.tif
Detector distortion file: http://www.silx.org/pub/pyFAI/cookbook/calibration/F_K4320T_Cam43_30012013_distorsion.spline
The calibration has been performed in the previous cookbook. The geometry is saved in “LaB6_29.4keV.poni”.
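If the input files are not already present locally, they can be fetched with the standard library. This is only a minimal sketch for convenience; the download loop itself is not part of the original cookbook, only the URLs above are:
import os
import urllib.request

base = "http://www.silx.org/pub/pyFAI/cookbook/calibration/"
for name in ("LaB6_29.4keV.tif",
             "F_K4320T_Cam43_30012013_distorsion.spline"):
    if not os.path.exists(name):  # skip files that are already present
        urllib.request.urlretrieve(base + name, name)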
Basic usage of pyFAI¶
To perform azimuthal averaging, one can use the pyFAI and FabIO libraries, the former to load the geometry and the latter to read the image:
In [1]:
import pyFAI, fabio
img = fabio.open("LaB6_29.4keV.tif")
print("image:", img)
ai = pyFAI.load("LaB6_29.4keV.poni")
print("\nIntegrator: \n", ai)
image: <fabio.tifimage.TifImage object at 0x7fc810670e48>
Integrator:
Detector Detector Spline= /users/kieffer/workspace-400/pyFAI/doc/source/usage/cookbook/F_K4320T_Cam43_30012013_distorsion.spline PixelSize= 5.168e-05, 5.126e-05 m
Wavelength= 4.217150e-11m
SampleDetDist= 1.182208e-01m PONI= 5.394843e-02, 5.551600e-02m rot1=0.006974 rot2= -0.003313 rot3= -0.000000 rad
DirectBeamDist= 118.224mm Center: x=1066.839, y=1036.336 pix Tilt=0.442 deg tiltPlanRotation= -154.594 deg
Azimuthal averaging using pyFAI¶
One first needs to retrieve the image as a numpy array. This makes it possible to use libraries other than FabIO for reading images, for example h5py for HDF5 files.
This shows how to perform the azimuthal integration of one image over 1000 bins:
In [2]:
img_array = img.data
print("img_array:", type(img_array), img_array.shape, img_array.dtype)
res = ai.integrate1d(img_array,
                     1000,
                     unit="2th_deg",
                     filename="integrated.dat")
img_array: <class 'numpy.ndarray'> (2048, 2048) float32
Note: this method has two mandatory parameters: the 2D numpy array with the image and the number of bins. In addition, we specified the name of the file where the data are saved and the unit used for the integration.
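The integrated curve is also available directly in memory through the returned result object. A short sketch, assuming the Integrate1dResult attributes radial and intensity (attribute names may differ slightly between pyFAI versions):
# Access the curve through the result object's attributes
tth = res.radial          # bin centres, here in degrees (2th_deg)
intensity = res.intensity
print(tth.shape, intensity.shape)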
There are many other options to integrate1d:
In [3]:
help(ai.integrate1d)
Help on method integrate1d in module pyFAI.azimuthalIntegrator:
integrate1d(data, npt, filename=None, correctSolidAngle=True, variance=None, error_model=None, radial_range=None, azimuth_range=None, mask=None, dummy=None, delta_dummy=None, polarization_factor=None, dark=None, flat=None, method='csr', unit=q_nm^-1, safe=True, normalization_factor=1.0, block_size=32, profile=False, all=False, metadata=None) method of pyFAI.azimuthalIntegrator.AzimuthalIntegrator instance
Calculate the azimuthal integrated Saxs curve in q(nm^-1) by default
Multi algorithm implementation (tries to be bullet proof), suitable for SAXS, WAXS, ... and much more
:param data: 2D array from the Detector/CCD camera
:type data: ndarray
:param npt: number of points in the output pattern
:type npt: int
:param filename: output filename in 2/3 column ascii format
:type filename: str
:param correctSolidAngle: correct for solid angle of each pixel if True
:type correctSolidAngle: bool
:param variance: array containing the variance of the data. If not available, no error propagation is done
:type variance: ndarray
:param error_model: When the variance is unknown, an error model can be given: "poisson" (variance = I), "azimuthal" (variance = (I-<I>)^2)
:type error_model: str
:param radial_range: The lower and upper range of the radial unit. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
:type radial_range: (float, float), optional
:param azimuth_range: The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
:type azimuth_range: (float, float), optional
:param mask: array (same size as image) with 1 for masked pixels, and 0 for valid pixels
:type mask: ndarray
:param dummy: value for dead/masked pixels
:type dummy: float
:param delta_dummy: precision for dummy value
:type delta_dummy: float
:param polarization_factor: polarization factor between -1 (vertical) and +1 (horizontal).
0 for circular polarization or random,
None for no correction,
True for using the former correction
:type polarization_factor: float
:param dark: dark noise image
:type dark: ndarray
:param flat: flat field image
:type flat: ndarray
:param method: can be "numpy", "cython", "BBox" or "splitpixel", "lut", "csr", "nosplit_csr", "full_csr", "lut_ocl" and "csr_ocl" if you want to go on GPU. To Specify the device: "csr_ocl_1,2"
:type method: str
:param unit: Output units, can be "q_nm^-1", "q_A^-1", "2th_deg", "2th_rad", "r_mm" for now
:type unit: pyFAI.units.Unit
:param safe: Do some extra checks to ensure LUT/CSR is still valid. False is faster.
:type safe: bool
:param normalization_factor: Value of a normalization monitor
:type normalization_factor: float
:param block_size: size of the block for OpenCL integration (unused?)
:param profile: set to True to enable profiling in OpenCL
:param all: if true return a dictionary with many more parameters (deprecated, please refer to the documentation of Integrate1dResult).
:type all: bool
:param metadata: JSON serializable object containing the metadata, usually a dictionary.
:return: q/2th/r bins center positions and regrouped intensity (and error array if variance or variance model provided), uneless all==True.
:rtype: Integrate1dResult, dict
The result file contains the integrated data with some headers as shown:
In [4]:
with open("integrated.dat") as f:
for i in range(50):
print(f.readline().strip())
# == pyFAI calibration ==
# Distance Sample to Detector: 0.118220810284 m
# PONI: 5.395e-02, 5.552e-02 m
# Rotations: 0.006974 -0.003313 -0.000000 rad
#
# == Fit2d calibration ==
# Distance Sample-beamCenter: 118.224 mm
# Center: x=1066.839, y=1036.336 pix
# Tilt: 0.442 deg TiltPlanRot: -154.594 deg
#
# Detector Detector Spline= /users/kieffer/workspace-400/pyFAI/doc/source/usage/cookbook/F_K4320T_Cam43_30012013_distorsion.spline PixelSize= 5.168e-05, 5.126e-05 m
# Detector has a mask: False
# Detector has a dark current: False
# detector has a flat field: False
#
# Wavelength: 4.21714957131e-11 m
# Mask applied: False
# Dark current applied: False
# Flat field applied: False
# Polarization factor: None
# Normalization factor: 1.0
# --> integrated.dat
# 2th_deg I
1.668855e-02 2.503605e+00
5.006564e-02 2.749244e+00
8.344273e-02 2.114206e+00
1.168198e-01 2.755390e+00
1.501969e-01 2.890013e+00
1.835740e-01 2.658021e+00
2.169511e-01 2.523173e+00
2.503282e-01 2.822044e+00
2.837053e-01 2.732563e+00
3.170824e-01 2.823707e+00
3.504595e-01 2.872520e+00
3.838366e-01 2.791172e+00
4.172137e-01 3.029095e+00
4.505907e-01 3.121556e+00
4.839678e-01 3.091311e+00
5.173449e-01 3.026508e+00
5.507220e-01 3.019458e+00
5.840991e-01 3.050202e+00
6.174762e-01 2.866862e+00
6.508533e-01 3.191242e+00
6.842304e-01 3.102998e+00
7.176075e-01 2.914966e+00
7.509846e-01 3.085007e+00
7.843617e-01 3.014659e+00
8.177388e-01 2.898686e+00
8.511159e-01 3.031742e+00
8.844929e-01 3.119843e+00
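The body of this file is plain two-column ASCII, so it can be reloaded with numpy for further processing; a minimal sketch:
import numpy
# Lines starting with "#" are treated as comments and skipped by loadtxt
tth, intensity = numpy.loadtxt("integrated.dat", unpack=True)
print(tth.min(), tth.max(), intensity.mean())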
Azimuthal regrouping using pyFAI¶
This option is similar to the integration but performs N integrations over various azimuthal angle (chi) sectors. It is also called “caking” in Fit2D.
The azimuthal regrouping of an image over 500 radial bins in 360 angular steps (of 1 degree) can be performed like this:
In [5]:
res2 = ai.integrate2d(img_array,
                      500, 360,
                      unit="r_mm",
                      filename="integrated.edf")
In [6]:
cake = fabio.open("integrated.edf")
print(cake.header)
print("cake:", type(cake.data), cake.data.shape, cake.data.dtype)
{
"{\nEDF_DataBlockID": "0.Image.Psd",
"EDF_BinarySize": "720000",
"EDF_HeaderSize": "1536",
"ByteOrder": "LowByteFirst",
"DataType": "FloatValue",
"Dim_1": "500",
"Dim_2": "360",
"Image": "0",
"HeaderID": "EH:000000:000000:000000",
"Size": "720000",
"Engine": "Detector Detector Spline= /users/kieffer/workspace-400/pyFAI/doc/source/usage/cookbook/F_K4320T_Cam43_30012013_distorsion.spline PixelSize= 5.168e-05, 5.126e-05 m Wavelength= 4.217150e-11m SampleDetDist= 1.182208e-01m PONI= 5.394843e-02, 5.551600e-02m rot1=0.006974 rot2= -0.003313 rot3= -0.000000 rad DirectBeamDist= 118.224mm Center: x=1066.839, y=1036.336 pix Tilt=0.442 deg tiltPlanRotation= -154.594 deg",
"detector": "Detector",
"pixel1": "5.1679e-05",
"pixel2": "5.1265e-05",
"splineFile": "/users/kieffer/workspace-400/pyFAI/doc/source/usage/cookbook/F_K4320T_Cam43_30012013_distorsion.spline",
"dist": "0.118220810284",
"poni1": "0.05394843456",
"poni2": "0.0555160034482",
"rot1": "0.00697431586749",
"rot2": "-0.00331252162112",
"rot3": "-4.98632051492e-10",
"wavelength": "4.21714957131e-11",
"r_mm_min": "0.07826394721632823",
"r_mm_max": "78.18567447975511",
"chi_min": "-179.50000495946867",
"chi_max": "179.50000495946867",
"has_mask_applied": "False",
"has_dark_correction": "False",
"has_flat_correction": "False",
"polarization_factor": "None",
"normalization_factor": "1.0"
}
cake: <class 'numpy.ndarray'> (360, 500) float32
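As in the 1D case, the caked data can also be accessed directly from the returned object instead of re-reading the EDF file. A sketch assuming the Integrate2dResult attributes intensity, radial and azimuthal (check the attribute names for your pyFAI version):
cake_array = res2.intensity   # 2D array of shape (360, 500): chi x r
r = res2.radial               # radial bin centres, here in mm
chi = res2.azimuthal          # azimuthal bin centres in degrees
print(cake_array.shape, r.shape, chi.shape)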
From here it is straightforward to loop over many images and integrate each of them.
Attention: The AzimuthalIntegrator object (called ai here) is rather large and costly to initialize. The best practice is to create it once and use it many times, like this:
In [7]:
import glob, os
all_images = glob.glob("LaB6*.tif")
ai = pyFAI.load("LaB6_29.4keV.poni")
for one_image in all_images:
    fimg = fabio.open(one_image)
    dest = os.path.splitext(one_image)[0] + ".dat"
    ai.integrate1d(fimg.data,
                   1000,
                   unit="2th_deg",
                   filename=dest)
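If dark-current and flat-field images are available, they can be passed to integrate1d inside the same loop. A hedged sketch, where "dark.tif" and "flat.tif" are hypothetical file names used only for illustration:
# Hypothetical correction images for this example
dark = fabio.open("dark.tif").data
flat = fabio.open("flat.tif").data
for one_image in all_images:
    fimg = fabio.open(one_image)
    dest = os.path.splitext(one_image)[0] + ".dat"
    ai.integrate1d(fimg.data, 1000,
                   unit="2th_deg",
                   dark=dark, flat=flat,
                   filename=dest)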
Using some advanced features of Jupyter notebooks¶
Jupyter notebooks offer some advanced visualization features, especially when used with matplotlib and pyFAI. Unfortunately, the examples shown hereafter will not work properly in plain Python scripts.
Initialization of the notebook for matplotlib integration:¶
In [8]:
%pylab nbagg
Populating the interactive namespace from numpy and matplotlib
In [9]:
from pyFAI.gui import jupyter
In [10]:
### Visualization of the different types of results previously calculated
In [11]:
jupyter.display(img.data, label=img.filename)
Out[11]:
<matplotlib.axes._subplots.AxesSubplot at 0x7fc78ed65978>
In [12]:
jupyter.plot1d(res)
Out[12]:
<matplotlib.axes._subplots.AxesSubplot at 0x7fc78ecbad30>
In [13]:
jupyter.plot2d(res2)
Out[13]:
<matplotlib.axes._subplots.AxesSubplot at 0x7fc78ec76160>
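Outside the notebook, the same 1D curve can be plotted with plain matplotlib; a minimal sketch using the result attributes mentioned above:
import matplotlib.pyplot as plt

plt.plot(res.radial, res.intensity)   # assuming Integrate1dResult attributes
plt.xlabel("2theta (deg)")
plt.ylabel("Intensity")
plt.show()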
Side note.¶
If you have tried to reproduce this, you may have noticed a couple of inconsistencies:
- The image file is named “.tif” while its content is actually an EDF.
- You probably did not get such a good calibration, and your caked image shows wavy lines.
The first issue may be corrected by uploading a properly converted image. The second issue is related to the spline file, which is wrong (flipped upside down).
Conclusion¶
This cookbook explained the basic usage of pyFAI as a Python library for azimuthal integration, along with simple visualization in a Jupyter notebook.