pyFAI package
pyFAI
Package
- pyFAI.__init__.AzimuthalIntegrator(*args, **kwargs)
- pyFAI.__init__.benchmarks(*arg, **kwarg)
Run the integrated benchmarks.
See the documentation of pyFAI.benchmark.run_benchmark
- pyFAI.__init__.detector_factory(name, config=None)
Create a new detector.
- Parameters:
name (str) – name of a detector
config (dict) – configuration of the detector supporting dict or JSON representation.
- Returns:
an instance of the right detector, set-up if possible
- Return type:
pyFAI.Detector
- pyFAI.__init__.load(filename)
Load an azimuthal integrator from a filename description.
- Parameters:
filename (str) – name of the file to load
- Returns:
instance of Geometry or AzimuthalIntegrator set up with the parameters from the file.
- pyFAI.__init__.tests(deprecation=False)
Runs the test suite of the installed version
- Parameters:
deprecation – enables/disables deprecation warnings in the tests
- pyFAI.__init__.use_opencl = True
Global configuration flag allowing OpenCL to be disabled programmatically. It must be set before requesting any OpenCL module.
import pyFAI
pyFAI.use_opencl = False
azimuthalIntegrator
Module
- class pyFAI.azimuthalIntegrator.AzimuthalIntegrator(dist=1, poni1=0, poni2=0, rot1=0, rot2=0, rot3=0, pixel1=None, pixel2=None, splineFile=None, detector=None, wavelength=None, orientation=0)
Bases:
Geometry
This class is an azimuthal integrator based on P. Boesecke’s geometry and the histogram algorithm by Manolo S. del Rio and V.A. Sole.
All geometry calculations are done in the Geometry class.
The main methods are:
>>> tth, I = ai.integrate1d(data, npt, unit="2th_deg")
>>> q, I, sigma = ai.integrate1d(data, npt, unit="q_nm^-1", error_model="poisson")
>>> regrouped = ai.integrate2d(data, npt_rad, npt_azim, unit="q_nm^-1")[0]
- DEFAULT_METHOD_1D = IntegrationMethod(1d int, full split, histogram, cython)
- DEFAULT_METHOD_2D = IntegrationMethod(2d int, full split, histogram, cython)
Fail-safe low-memory integrator
- USE_LEGACY_MASK_NORMALIZATION = True
If True, the Python engine integrator will normalize the mask so that the most frequent value of the mask is treated as the non-masking value.
This behaviour is not consistent with other engines and is now deprecated. This flag will be turned off in coming releases.
Turning off this flag forces the user to provide a mask with 0 as the non-masking value and any non-zero value (negative or positive) as the masking value. A boolean mask is also accepted (True is the masking value).
- __init__(dist=1, poni1=0, poni2=0, rot1=0, rot2=0, rot3=0, pixel1=None, pixel2=None, splineFile=None, detector=None, wavelength=None, orientation=0)
- Parameters:
dist (float) – distance sample - detector plane (orthogonal distance, not along the beam), in meter.
poni1 (float) – coordinate of the point of normal incidence along the detector’s first dimension, in meter
poni2 (float) – coordinate of the point of normal incidence along the detector’s second dimension, in meter
rot1 (float) – first rotation from sample ref to detector’s ref, in radians
rot2 (float) – second rotation from sample ref to detector’s ref, in radians
rot3 (float) – third rotation from sample ref to detector’s ref, in radians
pixel1 (float) – Deprecated. Pixel size of the first dimension of the detector, in meter. If both pixel1 and pixel2 are not None, the detector pixel size is overwritten. Prefer defining the pixel size on the provided detector object (detector.pixel1 = 5e-6).
pixel2 (float) – Deprecated. Pixel size of the second dimension of the detector, in meter. If both pixel1 and pixel2 are not None, the detector pixel size is overwritten. Prefer defining the pixel size on the provided detector object (detector.pixel2 = 5e-6).
splineFile (str) – Deprecated. File containing the geometric distortion of the detector. If not None, pixel1 and pixel2 are ignored and the detector spline is overwritten. Prefer defining the detector spline manually (detector.splineFile = "file.spline").
detector (str or pyFAI.Detector) – name of the detector or a Detector instance. String description is deprecated. Prefer using the result of the detector factory: pyFAI.detector_factory("eiger4m")
wavelength (float) – wavelength used, in meter
orientation (int) – orientation of the detector, see pyFAI.detectors.orientation.Orientation
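A minimal construction sketch (the detector name "Pilatus1M" and the geometry values below are purely illustrative):
import pyFAI
# build the detector first, then hand it to the integrator
det = pyFAI.detector_factory("Pilatus1M")
ai = pyFAI.AzimuthalIntegrator(dist=0.1, poni1=0.08, poni2=0.08,
                               rot1=0.0, rot2=0.0, rot3=0.0,
                               detector=det, wavelength=1e-10)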
- create_mask(data, mask=None, dummy=None, delta_dummy=None, unit=None, radial_range=None, azimuth_range=None, mode='normal')
Combines various masks into another one.
- Parameters:
data (ndarray) – input array of data
mask (ndarray) – input mask (if none, self.mask is used)
dummy (float) – value of dead pixels
delta_dummy – precision of dummy pixels
mode (str) – can be “normal” or “numpy” (inverted) or “where” applied to the mask
- Returns:
the new mask
- Return type:
ndarray of bool
This method combines two masks (the dynamic mask derived from data & dummy, and mask) into a new one using the ‘or’ binary operation. One can adjust the level with the dummy and delta_dummy parameters to decide which data values need to be masked out.
This method can work in different modes:
“normal”: False for valid pixels, True for bad pixels
“numpy”: True for valid pixels, False for others
“where”: does a numpy.where on the “numpy” output
This method tries to accommodate various types of masks (like valid=0 & masked=-1, …)
Note for the developer: we use numpy.logical_or a lot in this method; the out= argument allows buffers to be recycled, saving considerable time otherwise spent allocating temporary arrays.
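A short usage sketch, assuming an AzimuthalIntegrator instance ai and an image data from the surrounding examples:
# flag every pixel close to the dummy value -10 (within 0.5), combined with the static mask
bad = ai.create_mask(data, dummy=-10, delta_dummy=0.5, mode="normal")
# "normal" mode: True marks masked pixels, False marks valid ones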
- dark_correction(data, dark=None)
Correct for Dark-current effects. If dark is not defined, correct for a dark set by “set_darkfiles”
- Parameters:
data – input ndarray with the image
dark – ndarray with dark noise or None
- Returns:
2-tuple: corrected_data, dark actually used (or None)
- property darkcurrent
- property darkfiles
- property empty
- flat_correction(data, flat=None)
Correct for flat field. If flat is not defined, correct for a flat set by “set_flatfiles”
- Parameters:
data – input ndarray with the image
flat – ndarray with flatfield or None for no correction
- Returns:
2-tuple: corrected_data, flat actually used (or None)
- property flatfield
- property flatfiles
- get_darkcurrent()
- get_empty()
- get_flatfield()
- guess_max_bins(redundancy=1, search_range=None, unit='q_nm^-1', radial_range=None, azimuth_range=None)
Guess the maximum number of bins, considering the expected minimum redundancy:
- Parameters:
redundancy – minimum number of pixel per bin
search_range – the minimum and maximum number of bins to be considered
unit – the unit to be considered like “2th_deg” or “q_nm^-1”
radial_range – radial range to be considered, depends on unit !
azimuth_range – azimuthal range to be considered
- Returns:
the minimum bin number providing the provided redundancy
- guess_polarization(img, npt_rad=None, npt_azim=360, unit='2th_deg', method=('no', 'csr', 'cython'), target_rad=None)
Guess the polarization factor for the given image
To do so, several integrations are performed with different polarization factors and the one with the lowest std along the outermost ring is kept.
- Parameters:
img – diffraction image, preferable with beam-stop centered.
npt_rad – number of points in the radial dimension, can be guessed; better avoid oversampling.
npt_azim – number of points in the azimuthal dimension, 1 per degree is usually OK
unit – radial unit for the integration
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation). The default one is pretty optimal: no splitting, CSR for the speed of the integration
target_rad – position of the outer-most complete ring, can be guessed.
- Returns:
polarization factor (#, polarization angle)
- inpainting(data, mask, npt_rad=1024, npt_azim=512, unit='r_m', method='splitpixel', poissonian=False, grow_mask=3)
Re-invent the values of masked pixels
- Parameters:
data – input image as 2d numpy array
mask – masked out pixels array
npt_rad – number of radial points
npt_azim – number of azimuthal points
unit – unit to be used for integration
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
poissonian – If True, add some poissonian noise to the data to make it more realistic
grow_mask – grow the mask in polar coordinates to accommodate the pixel-splitting algorithm
- Returns:
inpainting object which contains the restored image as .data
- integrate1d(data, npt, filename=None, correctSolidAngle=True, variance=None, error_model=None, radial_range=None, azimuth_range=None, mask=None, dummy=None, delta_dummy=None, polarization_factor=None, dark=None, flat=None, method=('bbox', 'csr', 'cython'), unit=q_nm ^ -1, safe=True, normalization_factor=1.0, metadata=None)
Calculate the azimuthal integration (1d) of a 2D image.
Multi-algorithm implementation (tries to be bullet-proof), suitable for SAXS, WAXS, … and much more. Takes extra care of normalization and performs proper variance propagation.
- Parameters:
data (ndarray) – 2D array from the Detector/CCD camera
npt (int) – number of points in the output pattern
filename (str) – output filename in 2/3 column ascii format
correctSolidAngle (bool) – correct for solid angle of each pixel if True
variance (ndarray) – array containing the variance of the data.
error_model (str) – When the variance is unknown, an error model can be given: “poisson” (variance = I), “azimuthal” (variance = (I-<I>)^2)
radial_range ((float, float), optional) – The lower and upper range of the radial unit. If not provided, range is simply (min, max). Values outside the range are ignored.
azimuth_range ((float, float), optional) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (min, max). Values outside the range are ignored.
mask (ndarray) – array with 0 for valid pixels, all other are masked (static mask)
dummy (float) – value for dead/masked pixels (dynamic mask)
delta_dummy (float) – precision for dummy value
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal). 0 for circular polarization or random, None for no correction, True for using the former correction
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
unit (Unit) – Output units, can be “q_nm^-1” (default), “2th_deg”, “r_mm” for now.
safe (bool) – Perform some extra checks to ensure LUT/CSR is still valid. False is faster.
normalization_factor (float) – Value of a normalization monitor
metadata – JSON serializable object containing the metadata, usually a dictionary.
- Returns:
Integrate1dResult namedtuple with (q, I, sigma) + extra information in it.
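A short usage sketch, assuming an integrator ai and an image img as in the class-level examples:
# 1D integration with Poisson error propagation; the result unpacks as (q, I, sigma)
res = ai.integrate1d(img, 1000, unit="q_nm^-1", error_model="poisson")
q, I, sigma = res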
- integrate1d_legacy(data, npt, filename=None, correctSolidAngle=True, variance=None, error_model=None, radial_range=None, azimuth_range=None, mask=None, dummy=None, delta_dummy=None, polarization_factor=None, dark=None, flat=None, method='csr', unit=q_nm ^ -1, safe=True, normalization_factor=1.0, block_size=None, profile=False, metadata=None)
Calculate the azimuthally integrated SAXS curve in q(nm^-1) by default
Multi-algorithm implementation (tries to be bullet-proof), suitable for SAXS, WAXS, … and much more
- Parameters:
data (ndarray) – 2D array from the Detector/CCD camera
npt (int) – number of points in the output pattern
filename (str) – output filename in 2/3 column ascii format
correctSolidAngle (bool) – correct for solid angle of each pixel if True
variance (ndarray) – array containing the variance of the data. If not available, no error propagation is done
error_model (str) – When the variance is unknown, an error model can be given: “poisson” (variance = I), “azimuthal” (variance = (I-<I>)^2)
radial_range ((float, float), optional) – The lower and upper range of the radial unit. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
azimuth_range ((float, float), optional) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
mask (ndarray) – array (same size as image) with 1 for masked pixels, and 0 for valid pixels
dummy (float) – value for dead/masked pixels
delta_dummy (float) – precision for dummy value
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal). 0 for circular polarization or random, None for no correction, True for using the former correction
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
method (can be Method named tuple, IntegrationMethod instance or str to be parsed) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
unit (pyFAI.units.Unit) – Output units, can be “q_nm^-1”, “q_A^-1”, “2th_deg”, “2th_rad”, “r_mm” for now
safe (bool) – Do some extra checks to ensure LUT/CSR is still valid. False is faster.
normalization_factor (float) – Value of a normalization monitor
block_size – size of the block for OpenCL integration (unused?)
profile – set to True to enable profiling in OpenCL
all (bool) – if true return a dictionary with many more parameters (deprecated, please refer to the documentation of Integrate1dResult).
metadata – JSON serializable object containing the metadata, usually a dictionary.
- Returns:
q/2th/r bins center positions and regrouped intensity (and error array if variance or variance model provided)
- Return type:
Integrate1dResult, dict
- integrate1d_ng(data, npt, filename=None, correctSolidAngle=True, variance=None, error_model=None, radial_range=None, azimuth_range=None, mask=None, dummy=None, delta_dummy=None, polarization_factor=None, dark=None, flat=None, method=('bbox', 'csr', 'cython'), unit=q_nm ^ -1, safe=True, normalization_factor=1.0, metadata=None)
Calculate the azimuthal integration (1d) of a 2D image.
Multi-algorithm implementation (tries to be bullet-proof), suitable for SAXS, WAXS, … and much more. Takes extra care of normalization and performs proper variance propagation.
- Parameters:
data (ndarray) – 2D array from the Detector/CCD camera
npt (int) – number of points in the output pattern
filename (str) – output filename in 2/3 column ascii format
correctSolidAngle (bool) – correct for solid angle of each pixel if True
variance (ndarray) – array containing the variance of the data.
error_model (str) – When the variance is unknown, an error model can be given: “poisson” (variance = I), “azimuthal” (variance = (I-<I>)^2)
radial_range ((float, float), optional) – The lower and upper range of the radial unit. If not provided, range is simply (min, max). Values outside the range are ignored.
azimuth_range ((float, float), optional) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (min, max). Values outside the range are ignored.
mask (ndarray) – array with 0 for valid pixels, all other are masked (static mask)
dummy (float) – value for dead/masked pixels (dynamic mask)
delta_dummy (float) – precision for dummy value
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal). 0 for circular polarization or random, None for no correction, True for using the former correction
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
unit (Unit) – Output units, can be “q_nm^-1” (default), “2th_deg”, “r_mm” for now.
safe (bool) – Perform some extra checks to ensure LUT/CSR is still valid. False is faster.
normalization_factor (float) – Value of a normalization monitor
metadata – JSON serializable object containing the metadata, usually a dictionary.
- Returns:
Integrate1dResult namedtuple with (q, I, sigma) + extra information in it.
- integrate2d(data, npt_rad, npt_azim=360, filename=None, correctSolidAngle=True, variance=None, error_model=None, radial_range=None, azimuth_range=None, mask=None, dummy=None, delta_dummy=None, polarization_factor=None, dark=None, flat=None, method=('bbox', 'csr', 'cython'), unit=q_nm ^ -1, safe=True, normalization_factor=1.0, metadata=None)
Calculate the azimuthal regrouped 2d image in q(nm^-1)/chi(deg) by default
Multi algorithm implementation (tries to be bullet proof)
- Parameters:
data (ndarray) – 2D array from the Detector/CCD camera
npt_rad (int) – number of points in the radial direction
npt_azim (int) – number of points in the azimuthal direction
filename (str) – output image (as edf format)
correctSolidAngle (bool) – correct for solid angle of each pixel if True
variance (ndarray) – array containing the variance of the data. If not available, no error propagation is done
error_model (str) – When the variance is unknown, an error model can be given: “poisson” (variance = I), “azimuthal” (variance = (I-<I>)^2)
radial_range ((float, float), optional) – The lower and upper range of the radial unit. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
azimuth_range ((float, float), optional) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
mask (ndarray) – array (same size as image) with 1 for masked pixels, and 0 for valid pixels
dummy (float) – value for dead/masked pixels
delta_dummy (float) – precision for dummy value
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal). 0 for circular polarization or random, None for no correction
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
method (str) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
unit (pyFAI.units.Unit) – Output units, can be “q_nm^-1”, “q_A^-1”, “2th_deg”, “2th_rad”, “r_mm” for anything defined as pyFAI.units.RADIAL_UNITS can also be a 2-tuple of (RADIAL_UNITS, AZIMUTHAL_UNITS) (advanced usage)
safe (bool) – Do some extra checks to ensure LUT is still valid. False is faster.
normalization_factor (float) – Value of a normalization monitor
metadata – JSON serializable object containing the metadata, usually a dictionary.
- Returns:
azimuthally regrouped intensity, q/2theta/r pos. and chi pos.
- Return type:
Integrate2dResult, dict
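A hedged sketch, assuming ai and img as above; intensity, radial and azimuthal are taken to be the attributes of the returned Integrate2dResult:
# regroup onto a 500 (radial) x 360 (azimuthal) grid
res2d = ai.integrate2d(img, 500, 360, unit="q_nm^-1")
cake, q, chi = res2d.intensity, res2d.radial, res2d.azimuthal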
- integrate2d_legacy(data, npt_rad, npt_azim=360, filename=None, correctSolidAngle=True, variance=None, error_model=None, radial_range=None, azimuth_range=None, mask=None, dummy=None, delta_dummy=None, polarization_factor=None, dark=None, flat=None, method=None, unit=q_nm ^ -1, safe=True, normalization_factor=1.0, metadata=None)
Calculate the azimuthal regrouped 2d image in q(nm^-1)/chi(deg) by default
Multi algorithm implementation (tries to be bullet proof)
- Parameters:
data (ndarray) – 2D array from the Detector/CCD camera
npt_rad (int) – number of points in the radial direction
npt_azim (int) – number of points in the azimuthal direction
filename (str) – output image (as edf format)
correctSolidAngle (bool) – correct for solid angle of each pixel if True
variance (ndarray) – array containing the variance of the data. If not available, no error propagation is done
error_model (str) – When the variance is unknown, an error model can be given: “poisson” (variance = I), “azimuthal” (variance = (I-<I>)^2)
radial_range ((float, float), optional) – The lower and upper range of the radial unit. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
azimuth_range ((float, float), optional) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
mask (ndarray) – array (same size as image) with 1 for masked pixels, and 0 for valid pixels
dummy (float) – value for dead/masked pixels
delta_dummy (float) – precision for dummy value
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal). 0 for circular polarization or random, None for no correction
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
unit (pyFAI.units.Unit) – Output units, can be “q_nm^-1”, “q_A^-1”, “2th_deg”, “2th_rad”, “r_mm” for now
safe (bool) – Do some extra checks to ensure LUT is still valid. False is faster.
normalization_factor (float) – Value of a normalization monitor
all (bool) – if true, return many more intermediate results as a dict (deprecated, please refer to the documentation of Integrate2dResult).
metadata – JSON serializable object containing the metadata, usually a dictionary.
- Returns:
azimuthally regrouped intensity, q/2theta/r pos. and chi pos.
- Return type:
Integrate2dResult, dict
- integrate2d_ng(data, npt_rad, npt_azim=360, filename=None, correctSolidAngle=True, variance=None, error_model=None, radial_range=None, azimuth_range=None, mask=None, dummy=None, delta_dummy=None, polarization_factor=None, dark=None, flat=None, method=('bbox', 'csr', 'cython'), unit=q_nm ^ -1, safe=True, normalization_factor=1.0, metadata=None)
Calculate the azimuthal regrouped 2d image in q(nm^-1)/chi(deg) by default
Multi algorithm implementation (tries to be bullet proof)
- Parameters:
data (ndarray) – 2D array from the Detector/CCD camera
npt_rad (int) – number of points in the radial direction
npt_azim (int) – number of points in the azimuthal direction
filename (str) – output image (as edf format)
correctSolidAngle (bool) – correct for solid angle of each pixel if True
variance (ndarray) – array containing the variance of the data. If not available, no error propagation is done
error_model (str) – When the variance is unknown, an error model can be given: “poisson” (variance = I), “azimuthal” (variance = (I-<I>)^2)
radial_range ((float, float), optional) – The lower and upper range of the radial unit. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
azimuth_range ((float, float), optional) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
mask (ndarray) – array (same size as image) with 1 for masked pixels, and 0 for valid pixels
dummy (float) – value for dead/masked pixels
delta_dummy (float) – precision for dummy value
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal). 0 for circular polarization or random, None for no correction
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
method (str) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
unit (pyFAI.units.Unit) – Output units, can be “q_nm^-1”, “q_A^-1”, “2th_deg”, “2th_rad”, “r_mm” for anything defined as pyFAI.units.RADIAL_UNITS can also be a 2-tuple of (RADIAL_UNITS, AZIMUTHAL_UNITS) (advanced usage)
safe (bool) – Do some extra checks to ensure LUT is still valid. False is faster.
normalization_factor (float) – Value of a normalization monitor
metadata – JSON serializable object containing the metadata, usually a dictionary.
- Returns:
azimuthally regrouped intensity, q/2theta/r pos. and chi pos.
- Return type:
Integrate2dResult, dict
- integrate_radial(data, npt, npt_rad=100, correctSolidAngle=True, radial_range=None, azimuth_range=None, mask=None, dummy=None, delta_dummy=None, polarization_factor=None, dark=None, flat=None, method=('bbox', 'csr', 'cython'), unit=chi_deg, radial_unit=q_nm ^ -1, normalization_factor=1.0)
Calculate the radial integrated profile curve as I = f(chi)
- Parameters:
data (ndarray) – 2D array from the Detector/CCD camera
npt (int) – number of points in the output pattern
npt_rad (int) – number of points in the radial space. Too few points may lead to huge rounding errors.
filename (str) – output filename in 2/3 column ascii format
correctSolidAngle (bool) – correct for solid angle of each pixel if True
radial_range (Tuple(float, float)) – The lower and upper range of the radial unit. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored. Optional.
azimuth_range (Tuple(float, float)) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored. Optional.
mask (ndarray) – array (same size as image) with 1 for masked pixels, and 0 for valid pixels
dummy (float) – value for dead/masked pixels
delta_dummy (float) – precision for dummy value
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal): 0 for circular polarization or random, None for no correction, True for using the former correction
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
unit (pyFAI.units.Unit) – Output units, can be “chi_deg” or “chi_rad”
radial_unit (pyFAI.units.Unit) – unit used for radial representation, can be “q_nm^-1”, “q_A^-1”, “2th_deg”, “2th_rad”, “r_mm” for now
normalization_factor (float) – Value of a normalization monitor
- Returns:
chi bins center positions and regrouped intensity
- Return type:
Integrate1dResult
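An illustrative call, assuming ai and img as above (the q band is arbitrary):
# azimuthal profile I(chi) restricted to a narrow radial band
res = ai.integrate_radial(img, 360, npt_rad=200,
                          radial_range=(15.0, 16.0), radial_unit="q_nm^-1")
chi, I = res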
- medfilt1d(data, npt_rad=1024, npt_azim=512, correctSolidAngle=True, radial_range=None, azimuth_range=None, polarization_factor=None, dark=None, flat=None, method='splitpixel', unit=q_nm ^ -1, percentile=50, dummy=None, delta_dummy=None, mask=None, normalization_factor=1.0, metadata=None)
Perform the 2D integration and filter along each row using a median filter
- Parameters:
data – input image as numpy array
npt_rad – number of radial points
npt_azim – number of azimuthal points
correctSolidAngle (bool) – correct for solid angle of each pixel if True
radial_range ((float, float), optional) – The lower and upper range of the radial unit. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
azimuth_range ((float, float), optional) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal). 0 for circular polarization or random, None for no correction, True for using the former correction
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
unit – unit to be used for integration
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
percentile – which percentile to use for the cut; percentile can be a 2-tuple to specify a region to average out
mask – masked out pixels array
normalization_factor (float) – Value of a normalization monitor
metadata (JSON serializable dict) – any other metadata,
- Returns:
Integrate1dResult-like result
- reset(collect_garbage=True)
Reset azimuthal integrator in addition to other arrays.
- Parameters:
collect_garbage – set to False to prevent garbage collection, faster
- reset_engines(collect_garbage=True)
Urgently free memory by deleting all regrid-engines
- Parameters:
collect_garbage – set to False to prevent garbage collection, faster
- save1D(filename, dim1, I, error=None, dim1_unit=2th_deg, has_dark=False, has_flat=False, polarization_factor=None, normalization_factor=None)
This method saves the result of a 1D integration.
Deprecated on 13/06/2017
- Parameters:
filename (str) – the filename used to save the 1D integration
dim1 (numpy.ndarray) – the x coordinates of the integrated curve
I (numpy.ndarray) – The integrated intensity
error (numpy.ndarray or None) – the error bar for each intensity
dim1_unit (pyFAI.units.Unit) – the unit of the dim1 array
has_dark (bool) – save the darks filenames (default: no)
has_flat (bool) – save the flat filenames (default: no)
polarization_factor (float) – the polarization factor
normalization_factor (float) – the monitor value
- save2D(filename, I, dim1, dim2, error=None, dim1_unit=2th_deg, has_dark=False, has_flat=False, polarization_factor=None, normalization_factor=None)
This method saves the result of a 2D integration.
Deprecated on 13/06/2017
- Parameters:
filename (str) – the filename used to save the 2D histogram
dim1 (numpy.ndarray) – the 1st coordinates of the histogram
dim2 (numpy.ndarray) – the 2nd coordinates of the histogram
I (numpy.ndarray) – The integrated intensity
error (numpy.ndarray or None) – the error bar for each intensity
dim1_unit (pyFAI.units.Unit) – the unit of the dim1 array
has_dark (bool) – save the darks filenames (default: no)
has_flat (bool) – save the flat filenames (default: no)
polarization_factor (float) – the polarization factor
normalization_factor (float) – the monitor value
- separate(data, npt_rad=1024, npt_azim=512, unit='2th_deg', method='splitpixel', percentile=50, mask=None, restore_mask=True)
Separate Bragg signal from powder/amorphous signal using azimuthal integration and median filtering, then project it back before subtraction.
- Parameters:
data – input image as numpy array
npt_rad – number of radial points
npt_azim – number of azimuthal points
unit – unit to be used for integration
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
percentile – which percentile to use for the cut
mask – masked out pixels array
restore_mask – if set, masked pixels keep the same value as in the input data
- Returns:
SeparateResult containing the Bragg & amorphous signals
Note: the filtered 1D spectrum can be retrieved from SeparateResult.radial and SeparateResult.intensity
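An illustrative sketch, assuming ai and img as above and that bragg and amorphous are the image attributes of SeparateResult:
# split the image into Bragg-peak and amorphous contributions
sep = ai.separate(img, npt_rad=1024, npt_azim=512, unit="2th_deg")
bragg_img, amorphous_img = sep.bragg, sep.amorphous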
- set_darkcurrent(dark)
- set_darkfiles(files=None, method='mean')
Set the dark current from one or multiple files, averaged according to the method provided.
Moved to Detector.
- Parameters:
files (str or list(str) or None) – file(s) used to compute the dark.
method (str) – method used to compute the dark, “mean” or “median”
- set_empty(value)
- set_flatfield(flat)
- set_flatfiles(files, method='mean')
Set the flat field from one or multiple files, averaged according to the method provided.
Moved to Detector.
- Parameters:
files (str or list(str) or None) – file(s) used to compute the flat-field.
method (str) – method used to compute the flat, “mean” or “median”
- setup_CSR(shape, npt, mask=None, pos0_range=None, pos1_range=None, mask_checksum=None, unit=2th_deg, split='bbox', empty=None, scale=True)
See documentation of setup_sparse_integrator where algo=CSR
- setup_LUT(shape, npt, mask=None, pos0_range=None, pos1_range=None, mask_checksum=None, unit=2th_deg, split='bbox', empty=None, scale=True)
See documentation of setup_sparse_integrator where algo=LUT
- setup_sparse_integrator(shape, npt, mask=None, pos0_range=None, pos1_range=None, mask_checksum=None, unit=2th_deg, split='bbox', algo='CSR', empty=None, scale=True)
Prepare a sparse-matrix integrator based on LUT, CSR or CSC format
- Parameters:
shape ((int, int)) – shape of the dataset
npt (int or (int, int)) – number of points in the output pattern
mask (ndarray) – array with masked pixel (1=masked)
pos0_range ((float, float)) – range in radial dimension
pos1_range ((float, float)) – range in azimuthal dimension
mask_checksum (int (or anything else ...)) – checksum of the mask buffer
unit (pyFAI.units.Unit or 2-tuple of them for 2D integration) – unit propagated to the LUT object for further checks
split – Splitting scheme: valid options are “no”, “bbox”, “full”
algo – Sparse matrix format to use: “LUT”, “CSR” or “CSC”
empty – override the default empty value
scale – set to False to work in S.I. units for pos0_range, which is faster. By default, pos0_range is assumed to have units. Note that pos1_range, the chi angle, is expected in radians
This method is called when a look-up table needs to be set up. The shape parameter corresponds to the shape of the original dataset. It is possible to customize the number of points of the output histogram with the npt parameter, which can be either an integer for a 1D integration or a 2-tuple of integers for a 2D integration. The LUT will have a different shape: (npt, lut_max_size), the latter being calculated during the instantiation of the splitBBoxLUT class.
It is possible to prepare the LUT with a predefined mask. This can speed up the computation of later integrations: instead of applying the mask to the dataset, it is taken into account during the histogram computation. If provided, the mask_checksum prevents re-calculation of the mask. When the mask changes, its checksum is used to reset (or not) the LUT, which is a very time-consuming operation!
It is also possible to restrict the range of the 1D or 2D pattern with pos0_range (radial) and pos1_range (azimuthal).
The unit parameter is just propagated to the LUT integrator for further checks: the aim is to prevent an integration from being performed in 2theta space when the LUT was set up in q space. Unit can also be a 2-tuple in the case of a 2D integration.
- sigma_clip(data, npt=1024, correctSolidAngle=True, polarization_factor=None, variance=None, error_model=ErrorModel.NO, radial_range=None, azimuth_range=None, dark=None, flat=None, method=('no', 'csr', 'cython'), unit=q_nm ^ -1, thres=5.0, max_iter=5, dummy=None, delta_dummy=None, mask=None, normalization_factor=1.0, metadata=None, safe=True, **kwargs)
Performs the 1D integration iteratively with variance propagation, applying a sigma-clipping at each iteration: every pixel whose intensity differs by more than thres*std is discarded for the next iteration.
Keep only pixels with intensity:
|I - <I>| < thres * σ(I)
This enforces a symmetric, bell-shaped distribution (i.e. gaussian-like) and is very good at extracting background or amorphous isotropic scattering out of Bragg peaks.
- Parameters:
data – input image as numpy array
npt_rad – number of radial points
correctSolidAngle (bool) – correct for solid angle of each pixel if True
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal): 0 for circular polarization or random, None for no correction, True for using the former correction
radial_range ((float, float), optional) – The lower and upper range of the radial unit. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
azimuth_range ((float, float), optional) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
variance (ndarray) – the variance of the signal
error_model (str) – can be “poisson” to assume a poissonian detector (variance=I) or “azimuthal” to take the std² in each ring (better, more expensive)
unit – unit to be used for integration
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
thres – cut-off for n*sigma: discard any values with (I-<I>)/sigma > thres.
max_iter – maximum number of iterations
mask – masked out pixels array
normalization_factor (float) – Value of a normalization monitor
metadata (JSON serializable dict) – any other metadata,
safe – set to False to skip some tests
- Returns:
Integrate1dResult-like result
The difference with the previous sigma_clip_legacy implementation is that there is no 2D regrouping. Pixel splitting should be avoided with this implementation. The standard deviation is usually smaller than previously and the signal cleaner. It is also slightly faster.
If neither error_model nor variance is provided, it falls back on a poissonian model.
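A hedged sketch of extracting the isotropic signal, assuming ai and img as above (the threshold and iteration count are just the documented defaults):
# iterative sigma-clipping: Poisson statistics, 5-sigma cut, at most 5 passes
res = ai.sigma_clip(img, npt=1000, error_model="poisson", thres=5.0, max_iter=5, unit="q_nm^-1")
q, I, sigma = res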
- sigma_clip_legacy(data, npt_rad=1024, npt_azim=512, correctSolidAngle=True, polarization_factor=None, radial_range=None, azimuth_range=None, dark=None, flat=None, method=('full', 'histogram', 'cython'), unit=q_nm ^ -1, thres=3, max_iter=5, dummy=None, delta_dummy=None, mask=None, normalization_factor=1.0, metadata=None, safe=True, **kwargs)
Perform first a 2D integration and then an iterative sigma-clipping filter along each row. See the doc of scipy.stats.sigmaclip for the options thres and max_iter.
- Parameters:
data – input image as numpy array
npt_rad – number of radial points (alias: npt)
npt_azim – number of azimuthal points
correctSolidAngle (bool) – correct for solid angle of each pixel when set
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal): 0 for circular polarization or random, None for no correction, True for using the former correction
radial_range ((float, float), optional) – The lower and upper range of the radial unit. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
azimuth_range ((float, float), optional) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
unit – unit to be used for integration
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
thres – cut-off for n*sigma: discard any values with |I-<I>| > thres*σ. The threshold can be a 2-tuple with sigma_low and sigma_high.
max_iter – maximum number of iterations
mask – masked out pixels array
normalization_factor (float) – Value of a normalization monitor
metadata (JSON serializable dict) – any other metadata,
safe – unset to save some checks on sparse matrix shape/content.
- Kwargs:
unused, just for signature compatibility when used within Worker.
- Returns:
Integrate1dResult-like result
Nota: The initial 2D-integration requires pixel splitting
- sigma_clip_ng(data, npt=1024, correctSolidAngle=True, polarization_factor=None, variance=None, error_model=ErrorModel.NO, radial_range=None, azimuth_range=None, dark=None, flat=None, method=('no', 'csr', 'cython'), unit=q_nm ^ -1, thres=5.0, max_iter=5, dummy=None, delta_dummy=None, mask=None, normalization_factor=1.0, metadata=None, safe=True, **kwargs)
Performs the 1D integration iteratively with variance propagation, applying a sigma-clipping at each iteration: every pixel whose intensity differs by more than thres*std is discarded for the next iteration.
Keep only pixels with intensity:
|I - <I>| < thres * σ(I)
This enforces a symmetric, bell-shaped distribution (i.e. gaussian-like) and is very good at extracting background or amorphous isotropic scattering out of Bragg peaks.
- Parameters:
data – input image as numpy array
npt_rad – number of radial points
correctSolidAngle (bool) – correct for solid angle of each pixel if True
polarization_factor (float) – polarization factor between -1 (vertical) and +1 (horizontal): 0 for circular polarization or random, None for no correction, True for using the former correction
radial_range ((float, float), optional) – The lower and upper range of the radial unit. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
azimuth_range ((float, float), optional) – The lower and upper range of the azimuthal angle in degree. If not provided, range is simply (data.min(), data.max()). Values outside the range are ignored.
dark (ndarray) – dark noise image
flat (ndarray) – flat field image
variance (ndarray) – the variance of the signal
error_model (str) – can be “poisson” to assume a poissonian detector (variance=I) or “azimuthal” to take the std² in each ring (better, more expensive)
unit – unit to be used for integration
method (IntegrationMethod) – IntegrationMethod instance or 3-tuple with (splitting, algorithm, implementation)
thres – cut-off for n*sigma: discard any values with (I-<I>)/sigma > thres.
max_iter – maximum number of iterations
mask – masked out pixels array
normalization_factor (float) – Value of a normalization monitor
metadata (JSON serializable dict) – any other metadata,
safe – set to False to skip some tests
- Returns:
Integrate1dResult-like result
The difference with the previous sigma_clip_legacy implementation is that there is no 2D regrouping. Pixel splitting should be avoided with this implementation. The standard deviation is usually smaller than previously and the signal cleaner. It is also slightly faster.
If neither error_model nor variance is provided, it falls back on a poissonian model.
average
Module
- exception pyFAI.average.AlgorithmCreationError
Bases:
RuntimeError
Exception raised if the creation of an ImageReductionFilter is not possible
- class pyFAI.average.Average
Bases:
object
Process images to generate an average using different algorithms.
- __init__()
Constructor
- add_algorithm(algorithm)
Defines another algorithm which will be computed on the source.
- Parameters:
algorithm (ImageReductionFilter) – An averaging algorithm.
- get_counter_frames()
Returns the number of frames used for the process.
- Return type:
int
- get_fabio_images()
Returns source images as fabio images.
- Return type:
list(fabio.fabioimage.FabioImage)
- get_image_reduction(algorithm)
Returns the result of an algorithm. The process must be already done.
- Parameters:
algorithm (ImageReductionFilter) – An averaging algorithm
- Return type:
numpy.ndarray
- process()
Process the source images with all the averaging algorithms that have been defined, using the defined parameters. To access the results you have to define a writer (AverageWriter). To follow the progress of the process you have to define an observer (AverageObserver).
- set_correct_flat_from_dark(correct_flat_from_dark)
Defines if the dark must be applied on the flat.
- Parameters:
correct_flat_from_dark (bool) – If true, the dark is applied.
- set_dark(dark_list)
Defines images used as dark.
- Parameters:
dark_list (list) – List of dark used
- set_flat(flat_list)
Defines images used as flat.
- Parameters:
flat_list (list) – List of flats used
- set_images(image_list)
Defines the set of source images used to compute the average.
- Parameters:
image_list (list) – List of filename, numpy arrays, fabio images used as source for the computation.
- set_monitor_name(monitor_name)
Defines the monitor name used to correct images before processing the average. This monitor must be part of the file header, else the image is skipped.
- Parameters:
monitor_name (str) – Name of the monitor available in the file header
- set_observer(observer)
Set an observer to the average process.
- Parameters:
observer (AverageObserver) – An observer
- set_pixel_filter(threshold, minimum, maximum)
Defines the filter applied on each pixels of the images before processing the average.
- Parameters:
threshold – upper limit: all pixels > max*(1-threshold) are discarded.
minimum – minimum valid value or True
maximum – maximum valid value
- set_writer(writer)
Defines the writer object which will be used to store the result.
- Parameters:
writer (AverageWriter) – The writer to use.
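A hedged end-to-end sketch of the Average pipeline; the file names and the choice of a mean reduction are illustrative, while the classes used are the ones documented in this module:
from pyFAI import average
avg = average.Average()
avg.set_images(["dark_0001.edf", "dark_0002.edf", "dark_0003.edf"])    # source frames
avg.add_algorithm(average.MeanAveraging())                             # reduction to compute
writer = average.MultiFilesAverageWriter("dark_{method_name}", "edf")  # one output file per reduction
avg.set_writer(writer)
avg.process()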
- class pyFAI.average.AverageDarkFilter(filter_name, cut_off, quantiles)
Bases:
ImageStackFilter
Filter based on the algorithm of average_dark
TODO: Must be split according to each filter_name, and removed
- __init__(filter_name, cut_off, quantiles)
- get_parameters()
Return a dictionary containing filter parameters
- property name
- class pyFAI.average.AverageObserver
Bases:
object
- algorithm_finished(algorithm)
Called when an algorithm is finished
- algorithm_started(algorithm)
Called when an algorithm is started
- frame_processed(algorithm, frame_index, frames_count)
Called after providing a frame to an algorithm
- image_loaded(fabio_image, image_index, images_count)
Called when an input image is loaded
- process_finished()
Called when the full process is finished
- process_started()
Called when the full processing is started
- result_processing(algorithm)
Called before the result of an algorithm is computed
- class pyFAI.average.AverageWriter
Bases:
object
Interface for using writer in Average process.
- close()
Close the writer. Must not be used anymore.
- write_header(merged_files, nb_frames, monitor_name)
Write the header of the average
- Parameters:
merged_files (list) – List of files used to generate this output
nb_frames (int) – Number of frames used
monitor_name (str) – Name of the monitor used. Can be None.
- write_reduction(algorithm, data)
Write one reduction
- Parameters:
algorithm (ImageReductionFilter) – Algorithm used
data (object) – Data of this reduction
- class pyFAI.average.ImageAccumulatorFilter
Bases:
ImageReductionFilter
Filter applied to a set of images whose data can be reduced step by step into a single merged image.
- add_image(image)
Add an image to the filter.
- Parameters:
image (numpy.ndarray) – image to add
- get_result()
Get the result of the filter.
- Returns:
result filter
- Return type:
numpy.ndarray
- init(max_images=None)
Initialize the filter before using it.
- Parameters:
max_images (int) – Max images supported by the filter
- class pyFAI.average.ImageReductionFilter
Bases:
object
Generic filter applied to a set of images.
- add_image(image)
Add an image to the filter.
- Parameters:
image (numpy.ndarray) – image to add
- get_parameters()
Return a dictionary containing filter parameters
- Return type:
dict
- get_result()
Get the result of the filter.
- Returns:
result filter
- init(max_images=None)
Initialize the filter before using it.
- Parameters:
max_images (int) – Max images supported by the filter
- class pyFAI.average.ImageStackFilter
Bases:
ImageReductionFilter
Filter creating a stack from all images and computing everything at the end.
- add_image(image)
Add an image to the filter.
- Parameters:
image (numpy.ndarray) – image to add
- get_result()
Get the result of the filter.
- Returns:
result filter
- init(max_images=None)
Initialize the filter before using it.
- Parameters:
max_images (int) – Max images supported by the filter
- class pyFAI.average.MaxAveraging
Bases:
ImageAccumulatorFilter
- name = 'max'
- class pyFAI.average.MeanAveraging
Bases:
SumAveraging
- get_result()
Get the result of the filter.
- Returns:
result filter
- Return type:
numpy.ndarray
- name = 'mean'
- class pyFAI.average.MinAveraging
Bases:
ImageAccumulatorFilter
- name = 'min'
- class pyFAI.average.MultiFilesAverageWriter(file_name_pattern, file_format, dry_run=False)
Bases:
AverageWriter
Write reductions into multiple files. File headers are duplicated.
- __init__(file_name_pattern, file_format, dry_run=False)
- Parameters:
file_name_pattern (str) – File name pattern for the output files. If it contains “{method_name}”, it is updated for each reduction writing with the name of the reduction.
file_format (str) – File format used. It is the default extension file.
dry_run (bool) – If dry_run, the file is created in memory but not saved to the file system at the end
- close()
Close the writer. Must not be used anymore.
- get_fabio_image(algorithm)
Get the constructed fabio image
- Return type:
fabio.fabioimage.FabioImage
- write_header(merged_files, nb_frames, monitor_name)
Write the header of the average
- Parameters:
merged_files (list) – List of files used to generate this output
nb_frames (int) – Number of frames used
monitor_name (str) – Name of the monitor used. Can be None.
- write_reduction(algorithm, data)
Write one reduction
- Parameters:
algorithm (ImageReductionFilter) – Algorithm used
data (object) – Data of this reduction
- class pyFAI.average.SumAveraging
Bases:
ImageAccumulatorFilter
- name = 'sum'
- pyFAI.average.average_dark(lstimg, center_method='mean', cutoff=None, quantiles=(0.5, 0.5))
Averages a series of dark (or flat) images. Centers the result on the mean or the median … but averages all frames within cutoff*std
- Parameters:
lstimg – list of 2D images or a 3D stack
center_method (str) – method used to compute the center: “mean”, “median”, “quantile” or “std”
cutoff (float or None) – keep all data where (I-center)/std < cutoff
quantiles (tuple(float, float) or None) – 2-tuple of floats; average out data between the two quantiles
- Returns:
2D image averaged
- pyFAI.average.average_images(listImages, output=None, threshold=0.1, minimum=None, maximum=None, darks=None, flats=None, filter_='mean', correct_flat_from_dark=False, cutoff=None, quantiles=None, fformat='edf', monitor_key=None)
Takes a list of filenames and creates an average frame, discarding all saturated pixels.
- Parameters:
listImages – list of string representing the filenames
output – name of the optional output file
threshold – upper limit: all pixels > max*(1-threshold) are discarded.
minimum – minimum valid value or True
maximum – maximum valid value
darks – list of dark current images for subtraction
flats – list of flat field images for division
filter – can be “min”, “max”, “median”, “mean”, “sum”, “quantiles” (default=’mean’)
correct_flat_from_dark – shall the flat be re-corrected ?
cutoff – keep all data where (I-center)/std < cutoff
quantiles – 2-tuple containing the lower and upper quantile (0<q<1) to average out.
fformat – file format of the output image, default: edf
monitor_key (str) – Key containing the monitor. Can be None.
- Returns:
filename with the data, or the data ndarray when fformat is None
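A minimal illustrative call; the file names and the median filter choice are assumptions, not defaults:
from pyFAI import average
# median-average three frames into a single EDF file
out = average.average_images(["run_0001.edf", "run_0002.edf", "run_0003.edf"],
                             output="run_median.edf", filter_="median", fformat="edf")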
- pyFAI.average.bounding_box(img)
Tries to guess the bounding box around a valid massif
- Parameters:
img – 2D array like
- Returns:
4-tuple (d0_min, d1_min, d0_max, d1_max)
- pyFAI.average.common_prefix(string_list)
Return the common prefix of a list of strings
TODO: move it into utils package
- Parameters:
string_list (list(str)) – List of strings
- Return type:
str
- pyFAI.average.create_algorithm(filter_name, cut_off=None, quantiles=None)
Factory to create an algorithm according to the parameters
- Parameters:
cutoff (float or None) – keep all data where (I-center)/std < cutoff
quantiles (tuple(float, float) or None) – 2-tuple of floats; average out data between the two quantiles
- Returns:
An algorithm
- Return type:
ImageReductionFilter
- Raises:
AlgorithmCreationError – If it is not possible to create the algorithm
- pyFAI.average.is_algorithm_name_exists(filter_name)
Return True if the name is the name of a filter algorithm
- pyFAI.average.remove_saturated_pixel(ds, threshold=0.1, minimum=None, maximum=None)
Remove saturated pixels from an array in place.
- Parameters:
ds – a dataset as ndarray
threshold (float) – upper limit: all pixels > max*(1-threshold) are discarded.
minimum (float) – minimum valid value (or True for auto-guess)
maximum (float) – maximum valid value
- Returns:
the input dataset
multi_geometry
Module
Module for treating multiple detector configurations simultaneously within a single integration
- class pyFAI.multi_geometry.MultiGeometry(ais, unit='2th_deg', radial_range=(0, 180), azimuth_range=None, wavelength=None, empty=0.0, chi_disc=180, threadpoolsize=12)
Bases:
object
This is an Azimuthal integrator containing multiple geometries, for example when the detector is on a goniometer arm
- __init__(ais, unit='2th_deg', radial_range=(0, 180), azimuth_range=None, wavelength=None, empty=0.0, chi_disc=180, threadpoolsize=12)
Constructor of the multi-geometry integrator
- Parameters:
ais – list of azimuthal integrators
radial_range – common range for integration
azimuth_range – (2-tuple) common azimuthal range for integration
empty – value for empty pixels
chi_disc – if 0, set the chi_discontinuity at 0, else π
threadpoolsize – By default, use a thread-pool to parallelize histogram/CSC integrator over as many threads as cores, set to False/0 to serialize
- integrate1d(lst_data, npt=1800, correctSolidAngle=True, lst_variance=None, error_model=None, polarization_factor=None, normalization_factor=None, lst_mask=None, lst_flat=None, method=('full', 'histogram', 'cython'))
Perform 1D azimuthal integration
- Parameters:
lst_data – list of numpy array
npt – number of points in the integration
correctSolidAngle – correct for solid angle (all processing are then done in absolute solid angle !)
lst_variance (list of ndarray) – list of array containing the variance of the data. If not available, no error propagation is done
error_model (str) – When the variance is unknown, an error model can be given: “poisson” (variance = I), “azimuthal” (variance = (I-<I>)^2)
polarization_factor – Apply polarization correction? If None, no correction is applied; else provide a value from -1 to +1
normalization_factor – normalization monitors value (list of floats)
all – return a dict with all information in it (deprecated, please refer to the documentation of Integrate1dResult).
lst_mask – numpy array or list of numpy arrays used to mask the lst_data.
lst_flat – numpy array or list of numpy arrays used as flat field for the lst_data.
method – integration method, a string or a registered method
- Returns:
2th/I or a dict with everything depending on “all”
- Return type:
Integrate1dResult, dict
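A hedged sketch of combining two geometries; the integrators ai1/ai2, the images and the 2theta range are illustrative, and radial/intensity are taken to be the attributes of the returned Integrate1dResult:
from pyFAI.multi_geometry import MultiGeometry
# one AzimuthalIntegrator per detector position; images are passed in the same order
mg = MultiGeometry([ai1, ai2], unit="2th_deg", radial_range=(0, 60))
res = mg.integrate1d([img1, img2], npt=1800)
tth, I = res.radial, res.intensity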
- integrate2d(lst_data, npt_rad=1800, npt_azim=3600, correctSolidAngle=True, lst_variance=None, error_model=None, polarization_factor=None, normalization_factor=None, lst_mask=None, lst_flat=None, method=('full', 'histogram', 'cython'))
Performs 2D azimuthal integration of multiple frames, one for each geometry
- Parameters:
lst_data – list of numpy array
npt – number of points in the integration
correctSolidAngle – correct for solid angle (all processing are then done in absolute solid angle !)
lst_variance (list of ndarray) – list of array containing the variance of the data. If not available, no error propagation is done
error_model (str) – When the variance is unknown, an error model can be given: “poisson” (variance = I), “azimuthal” (variance = (I-<I>)^2)
polarization_factor – Apply polarization correction? If None, no correction is applied; else provide a value from -1 to +1
normalization_factor – normalization monitors value (list of floats)
all – return a dict with all information in it (deprecated, please refer to the documentation of Integrate2dResult).
lst_mask – numpy array or list of numpy arrays used to mask the lst_data.
lst_flat – numpy array or list of numpy arrays used as flat field for the lst_data.
method – integration method (or its name)
- Returns:
I/2th/chi or a dict with everything depending on “all”
- Return type:
Integrate2dResult, dict
- reset(collect_garbage=True)
Clean up all caches for all integrators
- Parameters:
collect_garbage – set to False to prevent garbage collection, faster
- set_wavelength(value)
Changes the wavelength of a group of azimuthal integrators
geometryRefinement
Module
Module used to perform the geometric refinement of the model
- class pyFAI.geometryRefinement.GeometryRefinement(data=None, calibrant=None, dist=1, poni1=None, poni2=None, rot1=0, rot2=0, rot3=0, pixel1=None, pixel2=None, splineFile=None, detector=None, wavelength=None, **kwargs)
Bases:
AzimuthalIntegrator
- PARAM_ORDER = ('dist', 'poni1', 'poni2', 'rot1', 'rot2', 'rot3', 'wavelength')
- __init__(data=None, calibrant=None, dist=1, poni1=None, poni2=None, rot1=0, rot2=0, rot3=0, pixel1=None, pixel2=None, splineFile=None, detector=None, wavelength=None, **kwargs)
- Parameters:
data – ndarray of float64 with shape = (n, 3); col0: position in dim0 (in pixels), col1: position in dim1 (in pixels), col2: ring index in the calibrant object
calibrant – instance of pyFAI.calibrant.Calibrant containing the d-Spacing
dist – guessed sample-detector distance (optional, in m)
poni1 – guessed PONI coordinate along the Y axis (optional, in m)
poni2 – guessed PONI coordinate along the X axis (optional, in m)
rot1 – guessed tilt of the detector around the Y axis (optional, in rad)
rot2 – guessed tilt of the detector around the X axis (optional, in rad)
rot3 – guessed tilt of the detector around the incoming beam axis (optional, in rad)
pixel1 – Pixel size along the vertical direction of the detector (in m), almost mandatory
pixel2 – Pixel size along the horizontal direction of the detector (in m), almost mandatory
splineFile – file describing the detector as 2 cubic splines. Replaces pixel1 & pixel2
detector – name of the detector or Detector instance. Replaces splineFile, pixel1 & pixel2
wavelength – wavelength in m (1.54e-10)
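A hedged sketch of setting up a refinement; the control points, the LaB6 calibrant, the detector name and the helper pyFAI.calibrant.get_calibrant are assumptions used for illustration only:
import numpy
from pyFAI.calibrant import get_calibrant
from pyFAI.geometryRefinement import GeometryRefinement
# one row per control point: (position in dim0 [pixel], position in dim1 [pixel], ring index)
cp = numpy.array([[120.0, 340.0, 0], [118.0, 982.0, 0], [405.0, 660.0, 1]])
gr = GeometryRefinement(data=cp, calibrant=get_calibrant("LaB6"),
                        dist=0.1, wavelength=1e-10, detector="Pilatus1M")
gr.refine3(fix=["wavelength"])   # least-squares refinement with the wavelength kept fixed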
- anneal(maxiter=1000000)
- calc_2th(rings, wavelength=None)
- Parameters:
rings – indices of the rings. starts at 0 and self.dSpacing should be long enough !!!
wavelength – wavelength in meter
- calc_param7(param, free, const)
Calculate the “legacy” 6/7 parameters from a number of free and fixed parameters
- chi2(param=None)
- chi2_wavelength(param=None)
- confidence(with_rot=True)
Confidence interval obtained from the second derivative of the error function next to its minimum value.
Note that the confidence interval increases with the number of points, which is “surprising”.
- Parameters:
with_rot – if true include rot1 & rot2 in the parameter set.
- Returns:
std_dev, confidence
- curve_fit(with_rot=True)
Refine the geometry (unconstrained fit) using curve_fit from scipy.optimize and provide a confidence interval.
- Parameters:
with_rot – include the rotations into the error measurement
- Returns:
std_dev, confidence
- property dist_max
- property dist_min
- get_dist_max()
- get_dist_min()
- get_poni1_max()
- get_poni1_min()
- get_poni2_max()
- get_poni2_min()
- get_rot1_max()
- get_rot1_min()
- get_rot2_max()
- get_rot2_min()
- get_rot3_max()
- get_rot3_min()
- get_wavelength_max()
- get_wavelength_min()
- guess_poni(fixed=None)
The PONI can be guessed from the centroid of the ring with the lowest 2theta.
It may try to fit an ellipse, which sometimes works.
- property poni1_max
- property poni1_min
- property poni2_max
- property poni2_min
- refine1()
- refine2(maxiter=1000000, fix=None)
- refine2_wavelength(maxiter=1000000, fix=None)
Refine all parameters including the wavelength.
This implies that it enforces an upper limit to the wavelength depending on the number of rings.
- refine3(maxiter=1000000, fix=None)
Same as refine2 except it does not rely on upper_bound == lower_bound to fix parameters.
This is a workaround for the regression introduced with scipy 1.5.
- Parameters:
maxiter – maximum number of iteration for finding the solution
fix – parameters to be fixed. Does not assume the wavelength to be fixed by default
- Returns:
$\sum (2\theta_e - 2\theta_i)^2$
- residu1(param, d1, d2, rings)
- residu1_wavelength(param, d1, d2, rings)
- residu2(param, d1, d2, rings)
- residu2_wavelength(param, d1, d2, rings)
- residu2_wavelength_weighted(param, d1, d2, rings, weight)
- residu2_weighted(param, d1, d2, rings, weight)
- residu3(param, free, const, d1, d2, rings, weights=None)
Perform the calculation of $\sum (2\theta_e - 2\theta_i)^2$
- roca()
Run roca to optimize the parameter set
- property rot1_max
- property rot1_min
- property rot2_max
- property rot2_min
- property rot3_max
- property rot3_min
- set_dist_max(value)
- set_dist_min(value)
- set_poni1_max(value)
- set_poni1_min(value)
- set_poni2_max(value)
- set_poni2_min(value)
- set_rot1_max(value)
- set_rot1_min(value)
- set_rot2_max(value)
- set_rot2_min(value)
- set_rot3_max(value)
- set_rot3_min(value)
- set_tolerance(value=10)
Set the tolerance for a refinement of the geometry; in percent of the original value
- Parameters:
value – Tolerance as a percentage
- set_wavelength_max(value)
- set_wavelength_min(value)
- simplex(maxiter=1000000)
- update_values(dist=None, wavelength=None, poni1=None, poni2=None, rot1=None, rot2=None, rot3=None, fixed=None)
Update values taking care of fixed parameters.
- property wavelength_max
- property wavelength_min
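The snippet below is a minimal, illustrative sketch of how a GeometryRefinement is typically driven; the control-point coordinates, the detector name and the numerical values are hypothetical, and in practice many points per ring are needed:
>>> import numpy
>>> from pyFAI.geometryRefinement import GeometryRefinement
>>> from pyFAI.calibrant import get_calibrant
>>> cal = get_calibrant("LaB6", wavelength=1.0332e-10)
>>> pts = numpy.array([[1021.3, 1534.2, 0.0],   # col0: pos in dim0 (pixels)
...                    [987.6, 402.8, 1.0],     # col1: pos in dim1 (pixels)
...                    [210.4, 1711.9, 2.0]])   # col2: ring index
>>> gr = GeometryRefinement(data=pts, calibrant=cal, dist=0.1,
...                         detector="Eiger4M", wavelength=1.0332e-10)
>>> residual = gr.refine3(fix=["wavelength"])   # cost: sum of (2th_e - 2th_i)^2
>>> gr.dist, gr.poni1, gr.poni2                 # refined geometry attributes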
goniometer
Module
Everything you need to calibrate a detector mounted on a goniometer or any translation table
- class pyFAI.goniometer.BaseTransformation(funct, param_names, pos_names=None)
Bases:
object
This class, once instantiated, behaves like a function (via the __call__ method). It is responsible for taking any input geometry and translating it into a set of parameters compatible with pyFAI, i.e. a tuple with: (dist, poni1, poni2, rot1, rot2, rot3)
This class relies on a user-provided function which does the work.
- __init__(funct, param_names, pos_names=None)
Constructor of the class
- Parameters:
funct – function which takes as parameter the param_names and the pos_name
param_names – list of names of the parameters used in the model
pos_names – list of motor names for gonio with >1 degree of freedom
- to_dict()
Export the instance representation for serialization as a dictionary
- class pyFAI.goniometer.ExtendedTransformation(dist_expr=None, poni1_expr=None, poni2_expr=None, rot1_expr=None, rot2_expr=None, rot3_expr=None, wavelength_expr=None, param_names=None, pos_names=None, constants=None, content=None)
Bases:
object
This class behaves like GeometryTransformation and extends transformation to the wavelength parameter.
This function uses numexpr for formula evaluation.
- __init__(dist_expr=None, poni1_expr=None, poni2_expr=None, rot1_expr=None, rot2_expr=None, rot3_expr=None, wavelength_expr=None, param_names=None, pos_names=None, constants=None, content=None)
Constructor of the class
- Parameters:
dist_expr – formula (as string) providing with the dist
poni1_expr – formula (as string) providing with the poni1
poni2_expr – formula (as string) providing with the poni2
rot1_expr – formula (as string) providing with the rot1
rot2_expr – formula (as string) providing with the rot2
rot3_expr – formula (as string) providing with the rot3
wavelength_expr – formula (as string) providing the wavelength, expressed in angstrom
param_names – list of names of the parameters used in the model
pos_names – list of motor names for gonio with >1 degree of freedom
constants – a dictionary with some constants the user may want to use
content – Should be None or the name of the class (may be used in the future to dispatch to multiple derivative classes)
- to_dict()
Export the instance representation for serialization as a dictionary
- class pyFAI.goniometer.GeometryTransformation(dist_expr, poni1_expr, poni2_expr, rot1_expr, rot2_expr, rot3_expr, param_names, pos_names=None, constants=None, content=None)
Bases:
object
This class, once instantiated, behaves like a function (via the __call__ method). It is responsible for taking any input geometry and translating it into a set of parameters compatible with pyFAI, i.e. a tuple with: (dist, poni1, poni2, rot1, rot2, rot3). This function uses numexpr for formula evaluation.
- __init__(dist_expr, poni1_expr, poni2_expr, rot1_expr, rot2_expr, rot3_expr, param_names, pos_names=None, constants=None, content=None)
Constructor of the class
- Parameters:
dist_expr – formula (as string) providing with the dist
poni1_expr – formula (as string) providing with the poni1
poni2_expr – formula (as string) providing with the poni2
rot1_expr – formula (as string) providing with the rot1
rot2_expr – formula (as string) providing with the rot2
rot3_expr – formula (as string) providing with the rot3
param_names – list of names of the parameters used in the model
pos_names – list of motor names for gonio with >1 degree of freedom
constants – a dictionary with some constants the user may want to use
content – Should be None or the name of the class (may be used in the future to dispatch to multiple derivative classes)
- property dist_expr
- property poni1_expr
- property poni2_expr
- property rot1_expr
- property rot2_expr
- property rot3_expr
- to_dict()
Export the instance representation for serialization as a dictionary
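As an illustration, the sketch below builds a GeometryTransformation for a single-axis goniometer whose motor position (in degrees) shifts poni1 linearly; the expressions, parameter names and the "scale" parameter are hypothetical:
>>> from pyFAI.goniometer import GeometryTransformation
>>> trans = GeometryTransformation(
...     dist_expr="dist",
...     poni1_expr="poni1 + pos * scale",
...     poni2_expr="poni2",
...     rot1_expr="rot1", rot2_expr="rot2", rot3_expr="rot3",
...     param_names=["dist", "poni1", "poni2", "rot1", "rot2", "rot3", "scale"],
...     pos_names=["pos"])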
- pyFAI.goniometer.GeometryTranslation
alias of
GeometryTransformation
- class pyFAI.goniometer.Goniometer(param, trans_function, detector='Detector', wavelength=None, param_names=None, pos_names=None)
Bases:
object
This class represents the goniometer model. Despite its name, it may include translations in addition to rotations.
- __init__(param, trans_function, detector='Detector', wavelength=None, param_names=None, pos_names=None)
Constructor of the Goniometer class.
- Parameters:
param – vector of parameter to refine for defining the detector position on the goniometer
trans_function – function taking the parameters of the goniometer and the goniometer position and returning the 6 parameters [dist, poni1, poni2, rot1, rot2, rot3]
detector – detector mounted on the moving arm
wavelength – the wavelength used for the experiment
param_names – list of names to “label” the param vector.
pos_names – list of names to “label” the position vector of the gonio.
- file_version = 'Goniometer calibration v2'
- get_ai(position)
Creates an azimuthal integrator from the motor position
- Parameters:
position – the goniometer position, a float for a 1 axis goniometer
- Returns:
A freshly built AzimuthalIntegrator
- get_mg(positions, unit='2th_deg', radial_range=(0, 180), azimuth_range=(-180, 180), empty=0.0, chi_disc=180)
Creates a MultiGeometry integrator from a list of goniometer positions.
- Parameters:
positions – A list of goniometer positions
radial_range – common radial range for integration
azimuth_range – common azimuthal range for integration
empty – value for empty pixels
chi_disc – if 0, set the chi_discontinuity at 0, else pi
- Returns:
A freshly built MultiGeometry integrator
- get_wavelength()
- save(filename)
Save the goniometer configuration to file
- Parameters:
filename – name of the file to save configuration to
- set_wavelength(value)
- classmethod sload(filename)
Class method for instantiating a Goniometer object from a JSON file
- Parameters:
filename – name of the JSON file
- Returns:
Goniometer object
- to_dict()
Export the goniometer configuration to a dictionary
- Returns:
Ordered dictionary
- property wavelength
- write(filename)
Save the goniometer configuration to file
- Parameters:
filename – name of the file to save configuration to
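A minimal usage sketch of a stored goniometer model; the JSON file name, motor positions and image list are hypothetical, and the final call assumes the usual integrate1d interface of the MultiGeometry integrator:
>>> from pyFAI.goniometer import Goniometer
>>> gonio = Goniometer.sload("gonio_calibration.json")
>>> ai = gonio.get_ai(30.0)                        # integrator for motor position 30
>>> mg = gonio.get_mg([0.0, 15.0, 30.0], unit="2th_deg")
>>> res = mg.integrate1d(list_of_images, 2000)     # one frame per position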
- class pyFAI.goniometer.GoniometerRefinement(param, pos_function, trans_function, detector='Detector', wavelength=None, param_names=None, pos_names=None, bounds=None)
Bases:
Goniometer
This class allows the translation of a goniometer geometry into a pyFAI geometry using a set of parameters to refine.
- __init__(param, pos_function, trans_function, detector='Detector', wavelength=None, param_names=None, pos_names=None, bounds=None)
Constructor of the GoniometerRefinement class
- Parameters:
param – vector of parameter to refine for defining the detector position on the goniometer
pos_function – a function taking metadata and extracting the goniometer position
trans_function – function taking the parameters of the goniometer and the goniometer position and returning the 6/7 parameters [dist, poni1, poni2, rot1, rot2, rot3, wavelength]
detector – detector mounted on the moving arm
wavelength – the wavelength used for the experiment
param_names – list of names to “label” the param vector.
pos_names – list of names to “label” the position vector of the gonio.
bounds – list of 2-tuple with the lower and upper bound of each function
- calc_param3(fit_param, free, const)
Function that calculates the param vector
- Parameters:
fit_param – numpy array of float
free – names of the free parameters, array of same size as fit_param
const – dict with constant (non-fitted) parameters
- Returns:
the parameter vector as in self.param
- chi2(param=None)
Calculate the average of the square of the error for a given parameter set
- get_wavelength()
- new_geometry(label, image=None, metadata=None, control_points=None, calibrant=None, geometry=None)
Add a new geometry for calibration
- Parameters:
label – usually a string
image – 2D numpy array with the Debye-Scherrer rings
metadata – some metadata
control_points – an instance of ControlPoints
calibrant – the calibrant used for calibrating
geometry – poni or AzimuthalIntegrator instance.
- refine2(method='slsqp', **options)
Geometry refinement tool
See https://docs.scipy.org/doc/scipy-0.18.1/reference/generated/scipy.optimize.minimize.html
Nota: When upper and lower bounds are equal, the jacobian gets NaN since scipy 1.5.
- Parameters:
method – name of the minimizer
options – options for the minimizer
- Returns:
refined set of parameter
- refine3(fix=None, method='slsqp', verbose=True, **options)
Geometry refinement tool
- Parameters:
fix – list of parameters to be fixed (others are left free for refinement)
method – name of the minimizer
options – options for the minimizer
- Returns:
refined set of parameter
- residu2(param)
Actually performs the calculation of the average of the error squared
- residu3(fit_param, free, const)
Evaluate the cost function:
- Parameters:
fit_param – numpy array of float
free – names of the free parameters, array of same size as fit_param
const – dict with constant (non-fitted) parameters
- Returns:
cost function value
- set_bounds(name, mini=None, maxi=None)
Redefines the bounds for the refinement
- Parameters:
name – name of the parameter or index in the parameter set
mini – minimum value
maxi – maximum value
- set_wavelength(value)
- classmethod sload(filename, pos_function=None)
Class method for instantiating a Goniometer object from a JSON file
- Parameters:
filename – name of the JSON file
pos_function – a function taking metadata and extracting the goniometer position
- Returns:
Goniometer object
- property wavelength
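The sketch below shows, with hypothetical values and file/variable names, how the pieces above fit together: a transformation model, a GoniometerRefinement, one geometry per motor position, keypoint extraction, and the final refinement (it assumes new_geometry returns the created SingleGeometry):
>>> from pyFAI.goniometer import GeometryTransformation, GoniometerRefinement
>>> from pyFAI.calibrant import get_calibrant
>>> cal = get_calibrant("LaB6", wavelength=1.0332e-10)
>>> trans = GeometryTransformation(
...     dist_expr="dist", poni1_expr="poni1 + pos * scale", poni2_expr="poni2",
...     rot1_expr="rot1", rot2_expr="rot2", rot3_expr="rot3",
...     param_names=["dist", "poni1", "poni2", "rot1", "rot2", "rot3", "scale"],
...     pos_names=["pos"])
>>> gonioref = GoniometerRefinement([0.1, 0.05, 0.05, 0.0, 0.0, 0.0, 1e-3],
...                                 pos_function=lambda md: md["angle"],
...                                 trans_function=trans,
...                                 detector="Eiger4M", wavelength=1.0332e-10)
>>> sg = gonioref.new_geometry("pos_00", image=img0, metadata={"angle": 0.0},
...                            calibrant=cal)
>>> sg.extract_cp(max_rings=5)                    # automatic keypoint extraction
>>> gonioref.refine2()                            # refine all parameters with SLSQP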
- class pyFAI.goniometer.PoniParam(dist, poni1, poni2, rot1, rot2, rot3)
Bases:
tuple
- dist
Alias for field number 0
- poni1
Alias for field number 1
- poni2
Alias for field number 2
- rot1
Alias for field number 3
- rot2
Alias for field number 4
- rot3
Alias for field number 5
- class pyFAI.goniometer.SingleGeometry(label, image=None, metadata=None, pos_function=None, control_points=None, calibrant=None, detector=None, geometry=None)
Bases:
object
This class represents a single geometry of a detector position on a goniometer arm
- __init__(label, image=None, metadata=None, pos_function=None, control_points=None, calibrant=None, detector=None, geometry=None)
Constructor of the SingleGeometry class, used for calibrating a multi-geometry setup with a moving detector.
- Parameters:
label – name of the geometry, a string or anything immutable
image – image with Debye-Scherrer rings as 2d numpy array
metadata – anything which contains the goniometer position
pos_function – a function which takes the metadata as input and returns the goniometer arm position
control_points – a pyFAI.control_points.ControlPoints instance (optional parameter)
calibrant – a pyFAI.calibrant.Calibrant instance. Contains the wavelength to be used (optional parameter)
detector – a pyFAI.detectors.Detector instance (or similar); contains the mask to be used (optional parameter)
geometry – an azimuthal integrator or a ponifile (or a dict with the geometry) (optional parameter)
- extract_cp(max_rings=None, pts_per_deg=1.0, Imin=0)
Performs an automatic keypoint extraction and update the geometry refinement part
- Parameters:
max_rings – extract at most N rings from the image
pts_per_deg – number of control points per azimuthal degree (increase for better precision)
- get_ai()
Create a new azimuthal integrator to be used.
- Returns:
Azimuthal Integrator instance
- get_position()
This method is in charge of calculating the motor position from metadata/label/…
- get_wavelength()
- set_wavelength(value)
- property wavelength
spline
Module
This module aims at manipulating spline files describing the geometric corrections of 2D detectors using cubic splines.
Mainly used at ESRF with FReLoN CCD camera.
- class pyFAI.spline.Spline(filename=None)
Bases:
object
This class is a Python representation of the spline file.
Those files represent cubic splines for 2D detector distortions and make heavy use of fitpack (dierckx in netlib), a Python-C wrapper to FITPACK (by P. Dierckx). FITPACK is a collection of FORTRAN programs for curve and surface fitting with splines and tensor product splines. See http://www.cs.kuleuven.ac.be/cwis/research/nalag/research/topics/fitpack.html or http://www.netlib.org/dierckx/index.html
- __init__(filename=None)
This is the constructor of the Spline class.
- Parameters:
filename (str) – name of the ascii file containing the spline
- array2spline(smoothing=1000, timing=False)
Calculates the spline coefficients from the displacements matrix using fitpack.
- Parameters:
smoothing (float) – the greater the smoothing, the fewer the number of knots remaining
timing (bool) – print the profiling of the calculation
- bin(binning=None)
Performs the binning of a spline (same camera with different binning)
- Parameters:
binning – binning factor as integer or 2-tuple of integers
- Type:
int or (int, int)
- comparison(ref, verbose=False)
Compares the current spline distortion with a reference
- Parameters:
ref (Spline) – another spline file
verbose (bool) – print or not pylab plots
- Returns:
True or False depending if the splines are the same or not
- Return type:
bool
- correct(pos)
- fliplr(fit=True)
Flip the spline horizontally
- Parameters:
fit (bool) – set to False to disable fitting of the coef, or provide a value for the smoothing factor
- Returns:
new spline object
- fliplrud(fit=True)
Flip the spline upside-down and horizontally
- Parameters:
fit (bool) – set to False to disable fitting of the coef, or provide a value for the smoothing factor
- Returns:
new spline object
- flipud(fit=True)
Flip the spline upside-down
- Parameters:
fit (bool) – set to False to disable fitting of the coef, or provide a value for the smoothing factor
- Returns:
new spline object
- getDetectorSize()
Returns the size of the detector.
- Return type:
Tuple[int,int]
- Returns:
Size y then x
- getPixelSize()
Return the size of the pixel as a 2-tuple of floats expressed in meters.
- Returns:
the size of the pixel from a 2D detector
- Return type:
2-tuple of floats expressed in meter.
- read(filename)
read an ascii spline file from file
- Parameters:
filename (str) – file containing the cubic spline distortion file
- setPixelSize(pixelSize)
Sets the size of the pixel from a 2-tuple of floats expressed in meters.
- Parameters:
pixelSize – pixel size in meter
- spline2array(timing=False)
Calculates the displacement matrix using fitpack bisplev(x, y, tck, dx = 0, dy = 0)
- Parameters:
timing (bool) – profile the calculation or not
- Returns:
xDispArray, yDispArray
- Return type:
2-tuple of ndarray
Evaluate a bivariate B-spline and its derivatives. Return a rank-2 array of spline function values (or spline derivative values) at points given by the cross-product of the rank-1 arrays x and y. In special cases, return an array or just a float if either x or y or both are floats.
- splineFuncX(x, y, list_of_points=False)
Calculates the displacement matrix using fitpack for the X direction on the given grid.
- Parameters:
x (ndarray) – points of the grid in the x direction
y (ndarray) – points of the grid in the y direction
list_of_points – if true, consider the zip(x, y) instead of the square array
- Returns:
displacement matrix for the X direction
- Return type:
ndarray
- splineFuncY(x, y, list_of_points=False)
calculates the displacement matrix using fitpack for the Y direction
- Parameters:
x (ndarray) – points in the x direction
y (ndarray) – points in the y direction
list_of_points – if true, consider the zip(x, y) instead of the square array
- Returns:
displacement matrix for the Y direction
- Return type:
ndarray
- tilt(center=(0.0, 0.0), tiltAngle=0.0, tiltPlanRot=0.0, distanceSampleDetector=1.0, timing=False)
The tilt method applies a virtual tilt to the detector; the point of tilt is given by the center.
- Parameters:
center (2-tuple of floats) – position of the point of tilt, this point will not be moved.
tiltAngle (float in the range [-90:+90] degrees) – the value of the tilt in degrees
tiltPlanRot (float in the range [-180:180]) – the rotation of the tilt plane with respect to the Ox axis (0 deg for y axis invariant, 90 deg for x axis invariant)
distanceSampleDetector (float) – the distance from sample to detector in meter (along the beam, so distance from sample to center)
- Returns:
tilted Spline instance
- Return type:
- write(filename)
save the cubic spline in an ascii file usable with Fit2D or SPD
- Parameters:
filename (str) – name of the file containing the cubic spline distortion file
- writeEDF(basename)
save the distortion matrices into a couple of files called basename-x.edf and basename-y.edf
- Parameters:
basename (str) – base of the name used to save the data
- zeros(xmin=0.0, ymin=0.0, xmax=2048.0, ymax=2048.0, pixSize=None)
Defines a spline file with no (zero) displacement.
- Parameters:
xmin (float) – minimum coordinate in x, usually zero
xmax (float) – maximum coordinate in x (+1) usually 2048
ymin (float) – minimum coordinate in y, usually zero
ymax (float) – maximum coordinate y (+1) usually 2048
pixSize (float) – size of the pixel
- zeros_like(other)
Defines a spline file with no (zero) displacement, with the same shape as the other one given.
- Parameters:
other (Spline instance) – another Spline instance
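A short sketch of typical Spline use; the spline file name is hypothetical:
>>> from pyFAI.spline import Spline
>>> spl = Spline("halfccd.spline")
>>> spl.getDetectorSize()            # (dim_y, dim_x) of the described detector
>>> dx, dy = spl.spline2array()      # displacement matrices along X and Y
>>> flipped = spl.flipud()           # new Spline object, flipped upside-down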
control_points
Module
ControlPoints: a set of control points associated with a calibration image
PointGroup: a group of points
- class pyFAI.control_points.ControlPoints(filename=None, calibrant=None, wavelength=None)
Bases:
object
This class contains a set of control points with (optionally) their ring number, hence d-spacing and diffraction 2theta angle…
- __init__(filename=None, calibrant=None, wavelength=None)
- append(points, ring=None, annotate=None, plot=None)
Append a group of points to a given ring
- Parameters:
points – list of points
ring – ring number
annotate – matplotlib.annotate reference
plot – matplotlib.plot reference
- Returns:
PointGroup instance
- append_2theta_deg(points, angle=None, ring=None)
Append a group of points to a given ring
- Parameters:
points – list of points
angle – 2-theta angle in degrees
ring – ring number
- check()
check internal consistency of the class, disabled for now
- property dSpacing
- get(ring=None, lbl=None)
Retrieves the last group of points for a given ring (by default the last ring)
- Parameters:
ring – index of ring to search for
lbl – label of the group to retrieve
- getList()
Retrieve the list of control points suitable for geometry refinement with ring number
- getList2theta()
Retrieve the list of control points suitable for geometry refinement
- getListRing()
Retrieve the list of control points suitable for geometry refinement with ring number
- getWeightedList(image)
Retrieve the list of control points suitable for geometry refinement with ring number and intensities
- Parameters:
image – image from which the intensities are read
- Returns:
a (x, 4) array with pos0, pos1, ring number and intensity
#TODO: refine the value of the intensity using a 2nd order polynomial
- get_dSpacing()
- get_labels()
Retrieve the list of labels
- Returns:
list of labels as string
- get_wavelength()
- load(filename)
load all control points from a file
- pop(ring=None, lbl=None)
Remove the set of points, either from its code or from a given ring (by default the last)
- Parameters:
ring – index of the ring from which to remove the last group
lbl – code of the ring to remove
- readRingNrFromKeyboard()
Ask the ring number values for the given points
- reset()
remove all stored values and resets them to default
- save(filename)
Save a set of control points to a file
- Parameters:
filename – name of the file
- Returns:
None
- setWavelength_change2th(value=None)
- setWavelength_changeDs(value=None)
This is probably not a good idea, but who knows !
- set_dSpacing(lst)
- set_wavelength(value=None)
- property wavelength
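A minimal sketch of building a ControlPoints set by hand; the coordinates and file name are hypothetical:
>>> from pyFAI.control_points import ControlPoints
>>> from pyFAI.calibrant import get_calibrant
>>> cp = ControlPoints(calibrant=get_calibrant("LaB6", wavelength=1.0332e-10))
>>> cp.append([(1021.3, 1534.2), (1019.8, 1498.1)], ring=0)
>>> cp.append([(987.6, 402.8)], ring=1)
>>> cp.save("lab6_calibration.npt")
>>> data = cp.getListRing()          # (pos0, pos1, ring) triplets for refinement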
- class pyFAI.control_points.PointGroup(points=None, ring=None, annotate=None, plot=None, force_label=None)
Bases:
object
Class contains a group of points … They all belong to the same Debye-Scherrer ring
- __init__(points=None, ring=None, annotate=None, plot=None, force_label=None)
Constructor
- Parameters:
points – list of points
ring – ring number
annotate – reference to the matplotlib annotate output
plot – reference to the matplotlib plot
force_label – allows to enforce the label
- property code
Numerical value for the label: mainly for sorting
- classmethod get_label()
return the next label
- get_ring()
- property label
- last_label = 0
- classmethod reset_label()
reset internal counter
- property ring
- classmethod set_label(label)
update the internal counter if needed
- set_ring(value)
massif
Module
- class pyFAI.massif.Massif(data=None, mask=None, median_prefilter=False)
Bases:
object
A massif is defined as an area around a peak; it is used to find neighboring peaks.
- TARGET_SIZE = 1024
- __init__(data=None, mask=None, median_prefilter=False)
Constructor of the Massif class
- Parameters:
data – 2D array or filename (discouraged)
mask – array with non zero for invalid data
median_prefilter – apply a 3x3 median prefilter to the data to sieve out outliers
- calculate_massif(x)
defines a map of the massif around x and returns the mask
- property cleaned_data
- find_peaks(x, nmax=200, annotate=None, massif_contour=None, stdout=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)
All-in-one function that finds a maximum from the given seed (x), then calculates the region extension and extracts the positions of the neighboring peaks.
- Parameters:
x (Tuple[int]) – coordinates of the peak, seed for the calculation
nmax (int) – maximum number of peak per region
annotate – callback method taking number of points + coordinate as input.
massif_contour – callback to show the contour of a massif with the given index.
stdout – this is the file where output is written by default.
- Returns:
list of peaks
- get_binned_data()
- Returns:
binned data
- get_blurred_data()
- Returns:
a blurred image
- get_labeled_massif(pattern=None, reconstruct=True)
- Parameters:
pattern – 3x3 matrix
reconstruct – if False, split massif at masked position, else reconstruct missing part.
- Returns:
an image composed of int with a different value for each massif
- get_median_data()
- Returns:
a spatial median filtered image 3x3
- init_valley_size()
- log_info
If true, more information is displayed in the logger relative to picking.
- nearest_peak(x)
- Parameters:
x – coordinates of the peak
- Returns:
the coordinates of the nearest peak
- peaks_from_area(mask, Imin=-1.7976931348623157e+308, keep=1000, dmin=0.0, seed=None, **kwarg)
Return the list of peaks within an area
- Parameters:
mask – 2d array with mask.
Imin – minimum of intensity above the background to keep the point
keep – maximum number of points to keep
kwarg – ignored parameters
dmin – minimum distance to another peak
seed – list of good guesses to start with
- Returns:
list of peaks [y,x], [y,x], …]
- property valley_size
Defines the minimum distance between two massifs
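A minimal sketch of peak picking with Massif, assuming img is a 2D array with Debye-Scherrer rings; the seed coordinates are hypothetical:
>>> from pyFAI.massif import Massif
>>> massif = Massif(img)
>>> peaks = massif.find_peaks((512, 661), nmax=100)   # grow from a user-picked seed
>>> massif.nearest_peak((500, 650))                   # snap a click to the closest peak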
blob_detection
Module
- class pyFAI.blob_detection.BlobDetection(img, cur_sigma=0.25, init_sigma=0.5, dest_sigma=1, scale_per_octave=2, mask=None)
Bases:
object
Performs a blob detection: http://en.wikipedia.org/wiki/Blob_detection using a Difference of Gaussian + Pyramid of Gaussians
- __init__(img, cur_sigma=0.25, init_sigma=0.5, dest_sigma=1, scale_per_octave=2, mask=None)
Performs a blob detection: http://en.wikipedia.org/wiki/Blob_detection using a Difference of Gaussian + Pyramid of Gaussians
- Parameters:
img – input image
cur_sigma – estimated smoothing of the input image. 0.25 corresponds to no interaction between pixels.
init_sigma – start searching at this scale (sigma=0.5: 10% interaction with first neighbor)
dest_sigma – sigma at which the resolution is lowered (change of octave)
scale_per_octave – number of scales to be computed per octave
mask – mask where pixel are not valid
- direction()
Compute and plot the two main directions of the peaks, considering their previously calculated scale, by calculating the Hessian at different sizes as the combination of Gaussians and their first and second derivatives.
- nearest_peak(p, refine=True, Imin=None)
Return the nearest peak from a position
- Parameters:
p – input position (y,x) 2-tuple of float
refine – shall the position be refined on the raw data
Imin – minimum of intensity above the background
- peaks_from_area(mask, keep=None, refine=True, Imin=None, dmin=0.0, **kwargs)
Return the list of peaks within an area
- Parameters:
mask – 2d array with mask.
refine – shall the position be refined on the raw data
Imin – minimum of intensity above the background
kwarg – ignored parameters
- Returns:
list of peaks [y,x], [y,x], …]
- process(max_octave=None)
Perform the keypoint extraction for max_octave cycles or until all octaves have been processed.
- Parameters:
max_octave – number of octaves to process
- refine_Hessian(kpx, kpy, kps)
Refine the keypoint location based on a 3 point derivative, and delete non-coherent keypoints.
- Parameters:
kpx – x_pos of keypoint
kpy – y_pos of keypoint
kps – s_pos of keypoint
- Returns:
arrays of corrected coordinates of keypoints, values and locations of keypoints
- refine_Hessian_SG(kpx, kpy, kps)
Savitzky-Golay algorithm to check if a point is really the maximum
- Parameters:
kpx – x_pos of keypoint
kpy – y_pos of keypoint
kps – s_pos of keypoint
- Returns:
array of corrected keypoints
- refinement()
- show_neighboor()
- show_stats()
Shows a window with the distribution of keypoints as a function of scale/intensity
- tresh = 0.6
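A minimal sketch of blob detection on a diffraction image img; the region-of-interest mask convention (non-zero selects the area, as in Massif.peaks_from_area) is an assumption:
>>> import numpy
>>> from pyFAI.blob_detection import BlobDetection
>>> bd = BlobDetection(img, mask=None)
>>> bd.process()                                  # build the DoG pyramid, extract keypoints
>>> roi = numpy.zeros(img.shape, dtype=bool)
>>> roi[400:600, 400:600] = True                  # hypothetical region of interest
>>> peaks = bd.peaks_from_area(roi, keep=50)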
- pyFAI.blob_detection.image_test()
- pyFAI.blob_detection.local_max(dogs, mask=None, n_5=True)
- Parameters:
dogs – 3d array with (sigma,y,x) containing difference of gaussians
mask – mask out keypoint next to the mask (or inside the mask)
n_5 – look for a larger neighborhood
- pyFAI.blob_detection.make_gaussian(im, sigma, xc, yc)
calibrant
Module
Calibrant
A module containing classical calibrants, and also tools to generate d-spacings.
Interesting formula: http://geoweb3.princeton.edu/research/MineralPhy/xtalgeometry.pdf
- pyFAI.calibrant.CALIBRANT_FACTORY = Calibrants available: mock, PBBA, Pt, LaB6, alpha_Al2O3, TiO2, Al, AgBh, ZnO, CrOx, NaCl, LaB6_SRM660b, CuO, Si, Si_SRM640b, Cr2O3, Si_SRM640c, Si_SRM640d, vanadinite, CeO2, Si_SRM640a, quartz, C14H30O, LaB6_SRM660a, Au, LaB6_SRM660c, Si_SRM640e, cristobaltite, Si_SRM640, hydrocerussite, Ni
Default calibration factory provided by the library.
- class pyFAI.calibrant.Calibrant(filename: str | None = None, dSpacing: List[float] | None = None, wavelength: float | None = None)
Bases:
object
A calibrant is a named reference compound whose d-spacings are known.
The d-spacings (interplanar distances) are expressed in Angstrom (in the file).
If the access is done from a file, the I/O is delayed until needed. If this is not desired, one can explicitly call
load_file()
c = Calibrant()
c.load_file("my_calibrant.D")
- Parameters:
filename – A filename containing the description (usually with .D extension). The access to the file description is delayed until the information is needed.
dSpacing – A list of d spacing in Angstrom.
wavelength – A wavelength in meter
- __init__(filename: str | None = None, dSpacing: List[float] | None = None, wavelength: float | None = None)
- append_2th(value: float)
Insert a 2th position at the right position of the dSpacing list
- append_dSpacing(value: float)
Insert a d position at the right position of the dSpacing list
- count_registered_dSpacing() int
Count of registered dSpacing positions.
- property dSpacing: List[float]
- fake_calibration_image(ai, shape=None, Imax=1.0, U=0, V=0, W=0.0001) ndarray
Generates a fake calibration image from an azimuthal integrator.
- Parameters:
ai – azimuthal integrator
Imax – maximum intensity of rings
U, V, W – width of the peak from Caglioti’s law (FWHM² = U·tan²(θ) + V·tan(θ) + W)
- property filename: str
- get_2th() List[float]
Returns the 2theta positions for all peaks (cached)
- get_2th_index(angle: float, delta: float | None = None) int
Returns the index of the given angle in the 2theta list.
- Parameters:
angle – expected angle in radians
delta – precision on angle
- Returns:
0-based index or None
- get_dSpacing() List[float]
- get_filename() str
- get_max_wavelength(index: int | None = None)
Calculate the maximum wavelength assuming the ring at index is visible.
Bragg’s law says: $\lambda = 2 d \sin(\theta)$, so at $2\theta = 180°$: $\lambda = 2 d$
- Parameters:
index – Ring number, otherwise assumes all rings are visible
- Returns:
the maximum visible wavelength
- get_peaks(unit: str = '2th_deg')
Calculate the peak positions in the given unit.
- Returns:
numpy array (unlike other methods which return lists)
- get_wavelength() float | None
Returns the used wavelength.
- load_file(filename: str)
Load a calibrant from a file.
- Parameters:
filename – The filename containing the calibrant description.
- property name: str
Returns a short name describing the calibrant.
It’s the name of the file or the resource.
- save_dSpacing(filename: str | None = None)
Save the d-spacing to a file.
- setWavelength_change2th(value: float | None = None)
Set a new wavelength.
- setWavelength_changeDs(value: float | None = None)
Set a new wavelength and only update the dSpacing list.
This is probably not a good idea, but who knows!
- set_dSpacing(lst: List[float])
- set_wavelength(value: float | None = None)
Set a new wavelength .
- property wavelength: float | None
Returns the used wavelength.
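A short sketch of typical Calibrant use (the wavelength value is hypothetical):
>>> from pyFAI.calibrant import get_calibrant
>>> lab6 = get_calibrant("LaB6", wavelength=1.0332e-10)
>>> tth = lab6.get_2th()                     # 2theta positions (radian) of reachable rings
>>> q_peaks = lab6.get_peaks(unit="q_nm^-1") # same peaks as a numpy array in q
>>> lab6.get_max_wavelength(index=0)         # largest wavelength keeping ring 0 visible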
- class pyFAI.calibrant.CalibrantFactory(basedir=None)
Bases:
object
Behaves like a dict but is actually a factory:
Each time one retrieves an object, it is a genuine new calibrant (unmodified)
- __init__(basedir=None)
Constructor
- Parameters:
basedir – directory name where to search for the calibrants
- get(what: str, notfound=None)
- has_key(k: str)
- items()
- keys()
- values()
- class pyFAI.calibrant.Cell(a=1, b=1, c=1, alpha=90, beta=90, gamma=90, lattice='triclinic', lattice_type='P')
Bases:
object
This is a cell object, able to calculate the volume and d-spacing according to formula from:
http://geoweb3.princeton.edu/research/MineralPhy/xtalgeometry.pdf
- __init__(a=1, b=1, c=1, alpha=90, beta=90, gamma=90, lattice='triclinic', lattice_type='P')
Constructor of the Cell class:
Crystallographic units are Angstrom for distances and degrees for angles!
- Parameters:
a,b,c – unit cell length in Angstrom
gamma (alpha, beta,) – unit cell angle in degrees
lattice – “cubic”, “tetragonal”, “hexagonal”, “rhombohedral”, “orthorhombic”, “monoclinic”, “triclinic”
lattice_type – P, I, F, C or R
- classmethod cubic(a, lattice_type='P')
Factory for cubic lattices
- Parameters:
a – unit cell length
- d(hkl)
Calculate the actual d-spacing for a 3-tuple of integers representing a family of Miller planes
- Parameters:
hkl – 3-tuple of integers
- Returns:
the inter-planar distance
- d_spacing(dmin=1.0)
Calculate all d-spacings down to dmin, applying the selection rules.
- Parameters:
dmin – minimum value of spacing requested
- Returns:
dict with the d-spacing (as a string) as key; the value is the numerical d-spacing followed by the list of tuples of Miller indices
- classmethod diamond(a)
Factory for diamond-type FCC lattices like Si and Ge
- Parameters:
a – unit cell length
- get_type()
- classmethod hexagonal(a, c, lattice_type='P')
Factory for hexagonal lattices
- Parameters:
a – unit cell length
c – unit cell length
- lattices = ['cubic', 'tetragonal', 'hexagonal', 'rhombohedral', 'orthorhombic', 'monoclinic', 'triclinic']
- classmethod monoclinic(a, b, c, beta, lattice_type='P')
Factory for monoclinic lattices
- Parameters:
a – unit cell length
b – unit cell length
c – unit cell length
beta – unit cell angle
- classmethod orthorhombic(a, b, c, lattice_type='P')
Factory for orthorhombic lattices
- Parameters:
a – unit cell length
b – unit cell length
c – unit cell length
- classmethod rhombohedral(a, alpha, lattice_type='P')
Factory for rhombohedral lattices
- Parameters:
a – unit cell length
alpha – unit cell angle
- save(name, long_name=None, doi=None, dmin=1.0, dest_dir=None)
Save information about the cell in a d-spacing file, usable as a calibrant
- Parameters:
name – name of the calibrant
doi – reference of the publication used to parametrize the cell
dmin – minimal d-spacing
dest_dir – name of the directory where to save the result
- selection_rules
contains a list of functions returning True (allowed) / False (forbidden) / None (unknown)
- set_type(lattice_type)
- classmethod tetragonal(a, c, lattice_type='P')
Factory for tetragonal lattices
- Parameters:
a – unit cell length
c – unit cell length
- property type
- types = {'C': 'Side centered', 'F': 'Face centered', 'I': 'Body centered', 'P': 'Primitive', 'R': 'Rhombohedral'}
- property volume
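A short sketch using the Cell factories; the lattice parameter (in Angstrom) and the output name are illustrative:
>>> from pyFAI.calibrant import Cell
>>> si = Cell.diamond(5.431)                 # diamond-type FCC cell, a = 5.431 Å
>>> si.volume                                # unit-cell volume
>>> ds = si.d_spacing(dmin=1.0)              # dict of d-spacings with Miller indices
>>> si.save("Si_custom", dmin=1.0)           # writes a d-spacing file usable as calibrant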
- pyFAI.calibrant.calibrant_factory(basedir=None)
- pyFAI.calibrant.get_calibrant(calibrant_name: str, wavelength: float | None = None) Calibrant
Returns a new instance of the calibrant by its name.
- Parameters:
calibrant_name – Name of the calibrant
wavelength – initialize the calibrant with the given wavelength (in m)
- pyFAI.calibrant.names() List[str]
Returns the list of registered calibrant names.
distortion
Module
- class pyFAI.distortion.Distortion(detector='detector', shape=None, resize=False, empty=0, mask=None, method='csr', device=None, workgroup=None)
Bases:
object
This class applies a distortion correction on an image.
New version compatible both with CSR and LUT…
- __init__(detector='detector', shape=None, resize=False, empty=0, mask=None, method='csr', device=None, workgroup=None)
- Parameters:
detector – detector instance or detector name
shape – shape of the output image
resize – allow the output shape to be different from the input shape
empty – value to be given for empty bins
method – “lut” or “csr”, the former is faster
device – name of the device: None for OpenMP, “cpu” or “gpu”, or the id of the OpenCL device as a 2-tuple of integers
workgroup – workgroup size for CSR on OpenCL
- calc_LUT(use_common=True)
Calculate the Look-up table
- Returns:
look up table either in CSR or LUT format depending on self.method
- calc_LUT_regular()
Calculate the Look-up table for a regular detector ….
- calc_init()
Initialize all arrays
- calc_pos(use_cython=True)
Calculate the pixel boundary position on the regular grid
- Returns:
pixel corner positions (in pixel units) on the regular grid
- Return type:
ndarray of shape (nrow, ncol, 4, 2)
- calc_size(use_cython=True)
Calculate the number of pixels falling into every single bin
- Returns:
max of pixel falling into a single bin
Considering the “half-CCD” spline from ID11 which describes a (1025, 2048) detector, the physical location of pixels should go from: [-17.48634 : 1027.0543, -22.768829 : 2028.3689]. We chose to discard pixels falling outside the [0:1025, 0:2048] range, with a loss of intensity.
- correct(image, dummy=None, delta_dummy=None)
Correct an image based on the look-up table calculated …
- Parameters:
image – 2D-array with the image
dummy – value suggested for bad pixels
delta_dummy – precision of the dummy value
- Returns:
corrected 2D image
- correct_ng(image, variance=None, dark=None, flat=None, solidangle=None, polarization=None, dummy=None, delta_dummy=None, normalization_factor=1.0)
Correct an image based on the calculated look-up table. Like integrate_ng it provides:
dark current correction
normalisation with flat-field (or solid angle, polarization, absorption, …)
error propagation
- Parameters:
image – 2D-array with the image
variance – 2D-array with the associated image
dark – array with dark-current values
flat – array with values for a flat image
solidangle – solid-angle array
polarization – numpy array with 2D polarization corrections
dummy – value suggested for bad pixels
delta_dummy – precision of the dummy value
normalization_factor – multiply all normalization with this value
- Returns:
corrected 2D image
- reset(method=None, device=None, workgroup=None, prepare=True)
reset the distortion correction and re-calculate the look-up table
- Parameters:
method – can be “lut” or “csr”, “lut” looks faster
device – can be None, “cpu” or “gpu” or the id as a 2-tuple of integer
workgroup – enforce the workgroup size for CSR.
prepare – set to false to only reset and not re-initialize
- property shape_out
Calculate/cache the output shape
- Returns:
output shape
- uncorrect(image, use_cython=False)
Take an image which has been corrected and transform it back into its raw form (with loss of information).
- Parameters:
image – 2D-array with the image
- Returns:
uncorrected 2D image
Nota: to retrieve the input mask one can do:
>>> msk = dis.uncorrect(numpy.ones(dis._shape_out)) <= 0
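A minimal forward-correction sketch, assuming the detector (here given only by its name) carries its own distortion description and raw_image is a 2D array:
>>> from pyFAI.distortion import Distortion
>>> dis = Distortion(detector="FReLoN", method="csr")
>>> corrected = dis.correct(raw_image)
>>> restored = dis.uncorrect(corrected)      # back to the raw grid (lossy)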
- class pyFAI.distortion.Quad(buffer)
Bases:
object
Quad modelling.
- __init__(buffer)
- calc_area()
- calc_area_AB(I1, I2)
- calc_area_BC(J1, J2)
- calc_area_CD(K1, K2)
- calc_area_DA(L1, L2)
- calc_area_old()
- calc_area_vectorial()
- get_box(i, j)
- get_box_size0()
- get_box_size1()
- get_idx(i, j)
- get_offset0()
- get_offset1()
- init_slope()
- integrateAB(start, stop, calc_area)
- populate_box()
- reinit(A0, A1, B0, B1, C0, C1, D0, D1)
- pyFAI.distortion.resize_image_2D_numpy(image, shape_in)
numpy implementation of resize_image_2D
units
Module
Manages the different units
Nota for developers: this module is used as a singleton to store all units in a unique manner. This explains the number of top-level variables on the one hand and their CAPITALIZATION on the other.
- pyFAI.units.CONST_hc = 12.398419843320026
Product of h, the Planck constant, and c, the speed of light in vacuum, in Angstrom·keV. It is approximately equal to:
pyFAI reference: 12.398419292004204
scipy v1.3.1: 12.398419739640717
scipy-1.4.0rc1: 12.398419843320026
- pyFAI.units.CONST_q = 1.602176634e-19
One electron-volt is equal to 1.602176634×10⁻¹⁹ joules
- class pyFAI.units.Unit(name, scale=1, label=None, equation=None, formula=None, center=None, corner=None, delta=None, short_name=None, unit_symbol=None, positive=True, period=None)
Bases:
object
Represents a unit.
It has at least a name and a scale (in SI-unit)
- __init__(name, scale=1, label=None, equation=None, formula=None, center=None, corner=None, delta=None, short_name=None, unit_symbol=None, positive=True, period=None)
Constructor of a unit.
- Parameters:
name (str) – name of the unit
scale (float) – scale of the unit to go to SI
label (str) – label for nice representation in matplotlib, can use latex representation
equation (func) – equation to calculate the value from coordinates (x,y,z) in detector space. Parameters of the function are x, y, z, wavelength
formula (str) – string with the mathematical formula. Valid variable names are x, y, z, λ and the constant π
center (str) – name of the fast-path function
unit_symbol (str) – symbol used to display values of this unit
positive (bool) – this value can only be positive
period – None or the periodicity of the unit (angles are periodic)
- get(key)
Mimics the dictionary interface
- Parameters:
key (str) – key wanted
- Returns:
self.key
- pyFAI.units.eq_2th(x, y, z, wavelength=None)
Calculates the 2theta aperture of the cone
- Parameters:
x – horizontal position, towards the center of the ring, from sample position
y – vertical position, to the roof, from sample position
z – distance from sample along the beam
wavelength – in meter
- Returns:
opening angle 2θ in radian
- pyFAI.units.eq_q(x, y, z, wavelength)
Calculates the modulus of the scattering vector
- Parameters:
x – horizontal position, towards the center of the ring, from sample position
y – vertical position, to the roof, from sample position
z – distance from sample along the beam
wavelength – in meter
- Returns:
modulus of the scattering vector q in inverse nm
- pyFAI.units.eq_r(x, y, z=None, wavelength=None)
Calculates the radius in meter
- Parameters:
x – horizontal position, towards the center of the ring, from sample position
y – vertical position, to the roof, from sample position
z – distance from sample along the beam
wavelength – in meter
- Returns:
radius in meter
- pyFAI.units.register_azimuthal_unit(name, scale=1, label=None, equation=None, formula=None, center=None, corner=None, delta=None, short_name=None, unit_symbol=None, positive=False, period=None)
- pyFAI.units.register_radial_unit(name, scale=1, label=None, equation=None, formula=None, center=None, corner=None, delta=None, short_name=None, unit_symbol=None, positive=True, period=None)
- pyFAI.units.to_unit(obj, type_=None)
Convert to Unit object
- Parameters:
obj – can be a unit or a string like “2th_deg”
type – family of units like AZIMUTHAL_UNITS or RADIAL_UNITS
- Returns:
Unit instance
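A short sketch of the units helpers (numerical values are illustrative):
>>> from pyFAI import units
>>> q_unit = units.to_unit("q_nm^-1")
>>> q_unit.name, q_unit.scale                # dictionary-like access also works: q_unit.get("scale")
>>> units.eq_2th(0.05, 0.0, 1.0)             # 2θ (radian) for a pixel 5 cm off-axis at 1 m
>>> units.eq_q(0.05, 0.0, 1.0, 1.0332e-10)   # corresponding q in inverse nm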
worker
Module
This module contains the Worker class:
A tool able to perform azimuthal integration with additional saving capabilities, like:
save as 2/3D structure in an HDF5 file
read from HDF5 files
It aims at being integrated into a plugin like LImA or serving as a model for the GUI.
The configuration of this class is mainly done via a dictionary transmitted as a JSON string. Here are the valid keys (a sketch of such a configuration is given after the list):
“dist”
“poni1”
“poni2”
“rot1”
“rot3”
“rot2”
“pixel1”
“pixel2”
“splineFile”
“wavelength”
“poni” #path of the file
“chi_discontinuity_at_0”
“do_mask”
“do_dark”
“do_azimuthal_range”
“do_flat”
“do_2D”
“azimuth_range_min”
“azimuth_range_max”
“polarization_factor”
“nbpt_rad”
“do_solid_angle”
“do_radial_range”
“error_model”
“delta_dummy”
“nbpt_azim”
“flat_field”
“radial_range_min”
“dark_current”
“do_polarization”
“mask_file”
“detector”
“unit”
“radial_range_max”
“val_dummy”
“do_dummy”
“method”
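As a sketch, a minimal configuration built from these keys could look like the dictionary below (the PONI path and numerical values are hypothetical); it can be passed as a Python dict or serialized to JSON:
config = {
    "poni": "calibration.poni",   # path of the PONI file holding the geometry
    "do_2D": False,               # 1D integration only
    "nbpt_rad": 1000,             # number of radial bins
    "unit": "q_nm^-1",
    "do_solid_angle": True,
    "error_model": "poisson",
    "method": "csr",
}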
- class pyFAI.worker.DistortionWorker(detector=None, dark=None, flat=None, solidangle=None, polarization=None, mask=None, dummy=None, delta_dummy=None, method='LUT', device=None)
Bases:
object
Simple worker doing dark, flat, solid angle and polarization correction
- __init__(detector=None, dark=None, flat=None, solidangle=None, polarization=None, mask=None, dummy=None, delta_dummy=None, method='LUT', device=None)
Constructor of the worker
- Parameters:
dark – array
flat – array
solidangle – solid-angle array
polarization – numpy array with 2D polarization corrections
dummy – value for bad pixels
delta_dummy – precision for dummies
method – LUT or CSR for the correction
device – used to influence OpenCL behaviour: can be “cpu”, “GPU”, “Acc” or even an OpenCL context
- process(data, variance=None, normalization_factor=1.0)
Process the data and apply a normalization factor
- Parameters:
data – input data
variance – the variance associated to the data
normalization_factor – normalization factor
- Returns:
processed data as either one array (data) or two (data, error)
- class pyFAI.worker.PixelwiseWorker(dark=None, flat=None, solidangle=None, polarization=None, mask=None, dummy=None, delta_dummy=None, device=None, empty=None, dtype='float32')
Bases:
object
Simple worker doing dark, flat, solid angle and polarization correction
- __init__(dark=None, flat=None, solidangle=None, polarization=None, mask=None, dummy=None, delta_dummy=None, device=None, empty=None, dtype='float32')
Constructor of the worker
- Parameters:
dark – array
flat – array
solidangle – solid-angle array
polarization – numpy array with 2D polarization corrections
device – used to influence OpenCL behaviour: can be “cpu”, “GPU”, “Acc” or even an OpenCL context
empty – value given for empty pixels by default
dtype – unit (and precision) in which to perform calculation: float32 or float64
- process(data, variance=None, normalization_factor=None, use_cython=True)
Process the data and apply a normalization factor
- Parameters:
data – input data
variance – the variance associated to the data
normalization_factor – normalization factor
- Returns:
processed data, optionally with the associated error if variance is provided
- class pyFAI.worker.Worker(azimuthalIntegrator=None, shapeIn=(2048, 2048), shapeOut=(360, 500), unit='r_mm', dummy=None, delta_dummy=None, method=('bbox', 'csr', 'cython'), integrator_name=None, extra_options=None)
Bases:
object
- __init__(azimuthalIntegrator=None, shapeIn=(2048, 2048), shapeOut=(360, 500), unit='r_mm', dummy=None, delta_dummy=None, method=('bbox', 'csr', 'cython'), integrator_name=None, extra_options=None)
- Parameters:
azimuthalIntegrator (AzimuthalIntegrator) – An AzimuthalIntegrator instance
shapeIn (tuple) – image size in input
shapeOut (tuple) – Integrated size: can be (1,2000) for 1D integration
unit (str) – can be “2th_deg”, “r_mm” or “q_nm^-1” …
dummy (float) – the value making invalid pixels
delta_dummy (float) – the precision for dummy values
method – integration method: str like “csr” or tuple (“bbox”, “csr”, “cython”) or IntegrationMethod instance.
integrator_name (str) – Offers an alternative to “integrate1d” like “sigma_clip_ng”
extra_options (dict) – extra kwargs for the integrator (like {“max_iter”:3, “thres”:0, “error_model”: “azimuthal”} for sigma-clipping)
- do_2D()
- get_config()
Returns the configuration as a dictionary.
- Returns:
dict with the config to be de-serialized with set_config/loaded with pyFAI.load
- get_json_config()
return configuration as a JSON string
- get_normalization_factor()
- get_unit()
- property nbpt_azim
- property normalization_factor
- process(data, variance=None, normalization_factor=1.0, writer=None, metadata=None)
Process one frame #TODO: dark, flat, sa are missing
- Parameters:
data – numpy array containing the input image
writer – An open writer in which ‘write’ will be called with the result of the integration
- reconfig(shape=(2048, 2048), sync=False)
This is just to force the integrator to initialize with a given input image shape
- Parameters:
shape – shape of the input image
sync – return only when synchronized
- reset()
this is just to force the integrator to initialize
- save_config(filename=None)
Save the configuration as a JSON file
- setDarkcurrentFile(imagefile)
- setExtension(ext)
enforce the extension of the processed data file written
- setFlatfieldFile(imagefile)
- setJsonConfig(json_file)
- setSubdir(path)
Set the relative or absolute path for processed data
- set_config(config, consume_keys=False)
Configure the worker from the dictionary.
- Parameters:
config (dict) – Key-value configuration
consume_keys (bool) – If true the keys from the dictionary will be consumed when used.
- set_dark_current_file(imagefile)
- set_flat_field_file(imagefile)
- set_json_config(json_file)
- set_method(method='csr')
Set the integration method
- set_normalization_factor(value)
- set_unit(value)
- property unit
- update_processor(integrator_name=None)
- static validate_config(config, raise_exception=<class 'RuntimeError'>)
Validates a configuration for any inconsistencies
- Parameters:
config – dict containing the configuration
raise_exception – exception class to raise when the configuration is not consistent
- Returns:
None or reason as a string when raise_exception is None, else raise the given exception
- warmup(sync=False)
Process a dummy image to ensure everything is initialized
- Parameters:
sync – wait for processing to be finished
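A minimal processing sketch, assuming a configuration dictionary like the one shown after the key list above and a 2D frame as input; file and variable names are hypothetical:
>>> from pyFAI.worker import Worker
>>> worker = Worker()
>>> worker.set_config({"poni": "calibration.poni", "do_2D": False,
...                    "nbpt_rad": 1000, "unit": "q_nm^-1"})
>>> worker.warmup()                      # process a dummy image to initialize engines
>>> result = worker.process(frame)       # frame: 2D numpy array from the detector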
- pyFAI.worker.make_ai(config, consume_keys=False)
Create an Azimuthal integrator from the configuration.
- Parameters:
config – Key-value dictionary with all parameters
consume_keys (bool) – If true the keys from the dictionary will be consumed when used.
- Returns:
A configured (but uninitialized) AzimuthalIntegrator.
containers
Module
Module containing holder classes, like returned objects.
- class pyFAI.containers.ErrorModel(value)
Bases:
IntEnum
An enumeration.
- AZIMUTHAL = 3
- HYBRID = 4
- NO = 0
- POISSON = 2
- VARIANCE = 1
- as_str()
- property do_variance
- classmethod parse(value)
- property poissonian
- class pyFAI.containers.Integrate1dResult(radial, intensity, sigma=None)
Bases:
IntegrateResult
Result of a 1D integration. Provides tuple access as a simple way to reach the main attributes. Default result, extra results, and some integration parameters are available from attributes.
For compatibility with older API, the object can be read as a tuple in different ways:
result = ai.integrate1d(...)
if result.sigma is None:
    radial, I = result
else:
    radial, I, sigma = result
- __init__(radial, intensity, sigma=None)
- property intensity
Regrouped intensity
- Return type:
numpy.ndarray
- property radial
Radial positions (q/2theta/r)
- Return type:
numpy.ndarray
- property sigma
Error array if it was requested
- Return type:
numpy.ndarray, None
- class pyFAI.containers.Integrate1dtpl(position, intensity, sigma, signal, variance, normalization, count, std, sem, norm_sq)
Bases:
tuple
- count
Alias for field number 6
- intensity
Alias for field number 1
- norm_sq
Alias for field number 9
- normalization
Alias for field number 5
- position
Alias for field number 0
- sem
Alias for field number 8
- sigma
Alias for field number 2
- signal
Alias for field number 3
- std
Alias for field number 7
- variance
Alias for field number 4
- class pyFAI.containers.Integrate2dResult(intensity, radial, azimuthal, sigma=None)
Bases:
IntegrateResult
Result of a 2D integration. Provides tuple access as a simple way to reach the main attributes. Default result, extra results, and some integration parameters are available from attributes.
For compatibility with older API, the object can be read as a tuple in different ways:
result = ai.integrate2d(...)
if result.sigma is None:
    I, radial, azimuthal = result
else:
    I, radial, azimuthal, sigma = result
- __init__(intensity, radial, azimuthal, sigma=None)
- property azimuthal
Azimuthal positions (chi)
- Return type:
numpy.ndarray
- property azimuthal_unit
Azimuthal unit
- Return type:
string
- property intensity
Azimuthally regrouped intensity
- Return type:
numpy.ndarray
- property radial
Radial positions (q/2theta/r)
- Return type:
numpy.ndarray
- property radial_unit
Radial unit
- Return type:
string
- property sigma
Error array if it was requested
- Return type:
numpy.ndarray, None
- class pyFAI.containers.Integrate2dtpl(radial, azimuthal, intensity, sigma, signal, variance, normalization, count, std, sem, norm_sq)
Bases:
tuple
- azimuthal
Alias for field number 1
- count
Alias for field number 7
- intensity
Alias for field number 2
- norm_sq
Alias for field number 10
- normalization
Alias for field number 6
- radial
Alias for field number 0
- sem
Alias for field number 9
- sigma
Alias for field number 3
- signal
Alias for field number 4
- std
Alias for field number 8
- variance
Alias for field number 5
- class pyFAI.containers.IntegrateResult
Bases:
tuple
Class defining shared information between Integrate1dResult and Integrate2dResult.
- __init__()
- property compute_engine
return the name of the compute engine, like CSR
- property count
Count information
- Return type:
numpy.ndarray
- property error_model
- property has_dark_correction
True if a dark correction was applied
- Return type:
bool
- property has_flat_correction
True if a flat correction was applied
- Return type:
bool
- property has_mask_applied
True if a mask was applied
- Return type:
bool
- property has_solidangle_correction
True if a solid-angle correction was applied
- Return type:
bool
- property metadata
Metadata associated with the input frame
- Return type:
JSON serializable dict object
- property method
return the name of the integration method _actually_ used, represented as a 4-tuple (dimension, splitting, algorithm, implementation)
- property method_called
return the name of the method called
- property normalization_factor
The normalisation factor used
- Return type:
float
- property npt_azim
for median filter along the azimuth, number of azimuthal bins initially used
- property percentile
for median filter along the azimuth, position of the centile retrieved
- property polarization_factor
The polarization factor used
- Return type:
float
- property poni
content of the PONI-file
- property sem
- property std
- property sum
Sum of all signal
- Return type:
numpy.ndarray
- property sum_normalization
Sum of all normalization information
- Return type:
numpy.ndarray
- property sum_normalization2
Sum of all normalization squared information
- Return type:
numpy.ndarray
- property sum_signal
Sum_signal information
- Return type:
numpy.ndarray
- property sum_variance
Sum of all variances information
- Return type:
numpy.ndarray
- property unit
Radial unit
- Return type:
string
- class pyFAI.containers.SeparateResult(bragg, amorphous)
Bases:
tuple
Class containing the result of AzimuthalIntegrator.separate, which separates the:
Amorphous isotropic signal (from a median filter or a sigma-clip)
Bragg peaks (signal > amorphous)
Shadow areas (signal < amorphous)
- __init__(bragg, amorphous)
- property amorphous
Contains the amorphous (isotropic) signal
- Return type:
numpy.ndarray
- property bragg
Contains the bragg peaks
- Return type:
numpy.ndarray
- property compute_engine
return the name of the compute engine, like CSR
- property count
Count information
- Return type:
numpy.ndarray
- property has_dark_correction
True if a dark correction was applied
- Return type:
bool
- property has_flat_correction
True if a flat correction was applied
- Return type:
bool
- property has_mask_applied
True if a mask was applied
- Return type:
bool
- property intensity
Regrouped intensity
- Return type:
numpy.ndarray
- property metadata
Metadata associated with the input frame
- Return type:
JSON serializable dict object
- property method
return the name of the integration method _actually_ used, represented as a 4-tuple (dimension, splitting, algorithm, implementation)
- property method_called
return the name of the method called
- property normalization_factor
The normalisation factor used
- Return type:
float
- property npt_azim
for median filter along the azimuth, number of azimuthal bins initially used
- property percentile
for median filter along the azimuth, position of the centile retrieved
- property polarization_factor
The polarization factor used
- Return type:
float
- property radial
Radial positions (q/2theta/r)
- Return type:
numpy.ndarray
- property shadow
Contains the shadowed (weak) signal part
- Return type:
numpy.ndarray
- property sigma
Error array if it was requested
- Return type:
numpy.ndarray, None
- property sum
Sum of all signal
- Return type:
numpy.ndarray
- property sum_normalization
Sum of all normalization information
- Return type:
numpy.ndarray
- property sum_signal
Sum of all signal information
- Return type:
numpy.ndarray
- property sum_variance
Sum of all variance information
- Return type:
numpy.ndarray
- property unit
Radial unit
- Return type:
string
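A minimal usage sketch for this container, assuming a placeholder PONI file and test frame; separate() is called with its default parameters and only properties documented above are read.
import numpy
import pyFAI

ai = pyFAI.load("geometry.poni")                      # placeholder PONI file
frame = numpy.random.poisson(50, size=(1024, 1024)).astype(numpy.float32)

# separate() estimates the isotropic (amorphous) component and the Bragg part.
res = ai.separate(frame)

bragg, amorphous = res           # the result unpacks like the (bragg, amorphous) tuple
shadow = res.shadow              # pixels falling below the amorphous estimate
print(res.npt_azim, res.percentile)   # settings of the azimuthal median filter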
- class pyFAI.containers.SparseFrame(index, intensity)
Bases:
tuple
Result of the sparsification of a diffraction frame
- __init__(index, intensity)
- property background_avg
- property background_std
- property cutoff
- property cutoff_clip
- property cutoff_peak
- property cutoff_pick
- property dtype
- property dummy
- property error_model
- property index
Contains the index positions of the Bragg peaks
- Return type:
numpy.ndarray
- property intensity
Contains the intensity of the Bragg peaks
- Return type:
numpy.ndarray
- property mask
Contains the mask used (it also encodes the shape of the image)
- Return type:
numpy.ndarray
- property noise
- property peak_connected
- property peak_patch_size
- property peaks
- property radius
- property shape
- property unit
- property x
- property y
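Since SparseFrame is essentially a named container, the stored peak pixels can be scattered back onto a dense image. The helper below is purely illustrative: to_dense is a hypothetical function (not part of pyFAI), the sparse object is assumed to come from pyFAI's sparsification tools, and the usual pyFAI convention of non-zero mask values marking masked pixels is assumed.
import numpy

def to_dense(sparse, fill=0.0):
    """Rebuild a dense image containing only the retained Bragg pixels."""
    shape = sparse.mask.shape                 # the mask also encodes the image shape
    dense = numpy.full(shape, fill, dtype=sparse.intensity.dtype)
    # `index` holds the flat pixel indices of the Bragg peaks, `intensity` their values
    dense.flat[sparse.index] = sparse.intensity
    dense[sparse.mask != 0] = fill            # keep masked pixels at the fill value
    return dense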
Other sub-packages:
- pyFAI.app package
- pyFAI.detectors package
- Module contents
ADSC_Q210
ADSC_Q270
ADSC_Q315
ADSC_Q4
Aarhus
Apex2
Basler
Cirpad
CylindricalDetector
Detector
Detector.API_VERSION
Detector.CORNERS
Detector.DELTA_DUMMY
Detector.DUMMY
Detector.HAVE_TAPER
Detector.IS_CONTIGUOUS
Detector.IS_FLAT
Detector.MANUFACTURER
Detector.ORIENTATION
Detector.__init__()
Detector.aliases
Detector.binning
Detector.calc_cartesian_positions()
Detector.calc_mask()
Detector.darkcurrent
Detector.delta_dummy
Detector.dummy
Detector.dynamic_mask()
Detector.factory()
Detector.flatfield
Detector.force_pixel
Detector.from_dict()
Detector.getFit2D()
Detector.getPyFAI()
Detector.get_binning()
Detector.get_config()
Detector.get_darkcurrent()
Detector.get_darkcurrent_crc()
Detector.get_flatfield()
Detector.get_flatfield_crc()
Detector.get_mask()
Detector.get_mask_crc()
Detector.get_maskfile()
Detector.get_name()
Detector.get_pixel1()
Detector.get_pixel2()
Detector.get_pixel_corners()
Detector.get_splineFile()
Detector.guess_binning()
Detector.mask
Detector.maskfile
Detector.name
Detector.orientation
Detector.pixel1
Detector.pixel2
Detector.registry
Detector.reset_pixel_corners()
Detector.save()
Detector.setFit2D()
Detector.setPyFAI()
Detector.set_binning()
Detector.set_config()
Detector.set_darkcurrent()
Detector.set_darkfiles()
Detector.set_dx()
Detector.set_dy()
Detector.set_flatfield()
Detector.set_flatfiles()
Detector.set_mask()
Detector.set_maskfile()
Detector.set_pixel1()
Detector.set_pixel2()
Detector.set_pixel_corners()
Detector.set_splineFile()
Detector.splineFile
Detector.uniform_pixel
Dexela2923
Eiger
Eiger16M
Eiger1M
Eiger2
Eiger2CdTe
Eiger2CdTe_16M
Eiger2CdTe_1M
Eiger2CdTe_1MW
Eiger2CdTe_2MW
Eiger2CdTe_4M
Eiger2CdTe_500k
Eiger2CdTe_9M
Eiger2_16M
Eiger2_1M
Eiger2_1MW
Eiger2_250k
Eiger2_2MW
Eiger2_4M
Eiger2_500k
Eiger2_9M
Eiger4M
Eiger500k
Eiger9M
FReLoN
Fairchild
HF_130K
HF_1M
HF_262k
HF_2M
HF_4M
HF_9M
HexDetector
ImXPadS10
ImXPadS10.BORDER_SIZE_RELATIVE
ImXPadS10.MANUFACTURER
ImXPadS10.MAX_SHAPE
ImXPadS10.MODULE_SIZE
ImXPadS10.PIXEL_SIZE
ImXPadS10.__init__()
ImXPadS10.aliases
ImXPadS10.calc_cartesian_positions()
ImXPadS10.calc_mask()
ImXPadS10.calc_pixels_edges()
ImXPadS10.force_pixel
ImXPadS10.get_config()
ImXPadS10.get_pixel_corners()
ImXPadS10.set_config()
ImXPadS10.uniform_pixel
ImXPadS140
ImXPadS70
ImXPadS70V
Jungfrau
Jungfrau4M
Jungfrau8M
Jungfrau_16M_cor
Lambda10M
Lambda250k
Lambda2M
Lambda60k
Lambda750k
Lambda7M5
Mar345
Mar555
Maxipix
Maxipix2x2
Maxipix5x1
Mythen
NexusDetector
Perkin
Pilatus
Pilatus100k
Pilatus1M
Pilatus200k
Pilatus2M
Pilatus300k
Pilatus300kw
Pilatus4
Pilatus4_1M
Pilatus4_260k
Pilatus4_260kw
Pilatus4_2M
Pilatus4_4M
Pilatus4_CdTe
Pilatus4_CdTe_1M
Pilatus4_CdTe_260k
Pilatus4_CdTe_260kw
Pilatus4_CdTe_2M
Pilatus4_CdTe_4M
Pilatus6M
Pilatus900k
PilatusCdTe
PilatusCdTe1M
PilatusCdTe2M
PilatusCdTe300k
PilatusCdTe300kw
PilatusCdTe900kw
Pixirad1
Pixirad2
Pixirad4
Pixirad8
Pixium
Rapid
RaspberryPi5M
RaspberryPi8M
Rayonix133
RayonixLx170
RayonixLx255
RayonixMx170
RayonixMx225
RayonixMx225hs
RayonixMx300
RayonixMx300hs
RayonixMx325
RayonixMx340hs
RayonixMx425hs
RayonixSx165
RayonixSx200
RayonixSx30hs
RayonixSx85hs
Titan
Xpad_flat
Xpad_flat.BORDER_PIXEL_SIZE_RELATIVE
Xpad_flat.IS_CONTIGUOUS
Xpad_flat.MAX_SHAPE
Xpad_flat.MODULE_GAP
Xpad_flat.MODULE_SIZE
Xpad_flat.PIXEL_SIZE
Xpad_flat.__init__()
Xpad_flat.aliases
Xpad_flat.calc_cartesian_positions()
Xpad_flat.calc_mask()
Xpad_flat.calc_pixels_edges()
Xpad_flat.force_pixel
Xpad_flat.get_pixel_corners()
Xpad_flat.uniform_pixel
detector_factory()
load()
- Module contents
- pyFAI.engines package
- pyFAI.ext package
- pyFAI.ext.bilinear module
- pyFAI.ext.fastcrc module
- pyFAI.ext.histogram module
- pyFAI.ext.inpainting module
- pyFAI.ext.invert_geometry module
- pyFAI.ext.morphology module
- pyFAI.ext.preproc module
- pyFAI.ext.reconstruct module
- pyFAI.ext.relabel module
- pyFAI.ext.sparse_builder module
- pyFAI.ext.sparse_utils module
ArrayBuilder
CSR_to_LUT()
CsrIntegrator
CsrIntegrator.__init__()
CsrIntegrator.data
CsrIntegrator.empty
CsrIntegrator.indices
CsrIntegrator.indptr
CsrIntegrator.input_size
CsrIntegrator.integrate()
CsrIntegrator.integrate_legacy()
CsrIntegrator.integrate_ng()
CsrIntegrator.nnz
CsrIntegrator.output_size
CsrIntegrator.preprocessed
CsrIntegrator.sigma_clip()
LUT_to_CSR()
LutIntegrator
Vector
calc_area()
clip()
recenter()
- pyFAI.ext.splitBBox module
- pyFAI.ext.splitBBoxCSR module
CsrIntegrator
CsrIntegrator.__init__()
CsrIntegrator.data
CsrIntegrator.empty
CsrIntegrator.indices
CsrIntegrator.indptr
CsrIntegrator.input_size
CsrIntegrator.integrate()
CsrIntegrator.integrate_legacy()
CsrIntegrator.integrate_ng()
CsrIntegrator.nnz
CsrIntegrator.output_size
CsrIntegrator.preprocessed
CsrIntegrator.sigma_clip()
HistoBBox1d
HistoBBox2d
calc_area()
clip()
recenter()
- pyFAI.ext.splitBBoxLUT module
- pyFAI.ext.splitPixel module
- pyFAI.ext.splitPixelFullCSR module
CsrIntegrator
CsrIntegrator.__init__()
CsrIntegrator.data
CsrIntegrator.empty
CsrIntegrator.indices
CsrIntegrator.indptr
CsrIntegrator.input_size
CsrIntegrator.integrate()
CsrIntegrator.integrate_legacy()
CsrIntegrator.integrate_ng()
CsrIntegrator.nnz
CsrIntegrator.output_size
CsrIntegrator.preprocessed
CsrIntegrator.sigma_clip()
FullSplitCSR_1d
FullSplitCSR_2d
calc_area()
clip()
recenter()
- pyFAI.ext.splitPixelFullLUT module
- pyFAI.ext.watershed module
Bilinear
InverseWatershed
InverseWatershed.NAME
InverseWatershed.VERSION
InverseWatershed.__init__()
InverseWatershed.init()
InverseWatershed.init_borders()
InverseWatershed.init_labels()
InverseWatershed.init_pass()
InverseWatershed.init_regions()
InverseWatershed.load()
InverseWatershed.merge_intense()
InverseWatershed.merge_singleton()
InverseWatershed.merge_twins()
InverseWatershed.peaks_from_area()
InverseWatershed.save()
Region
Region.border
Region.get_borders()
Region.get_highest_pass()
Region.get_index()
Region.get_maxi()
Region.get_mini()
Region.get_neighbors()
Region.get_pass_to()
Region.get_size()
Region.highest_pass
Region.index
Region.init_values()
Region.maxi
Region.merge()
Region.mini
Region.neighbors
Region.pass_to
Region.peaks
Region.size
- Module contents
- pyFAI.ext private package
ext._bispev
Module
ext._blob
Module
ext._convolution
Module
ext._distortion
Module
Distortion
calc_CSR()
calc_LUT()
calc_area()
calc_pos()
calc_size()
calc_sparse()
calc_sparse_v2()
clip()
correct()
correct_CSR()
correct_CSR_double()
correct_CSR_kahan()
correct_CSR_preproc_double()
correct_LUT()
correct_LUT_double()
correct_LUT_kahan()
correct_LUT_preproc_double()
recenter()
resize_image_2D()
resize_image_3D()
uncorrect_CSR()
uncorrect_LUT()
ext._geometry
Module
ext._tree
Module
TreeItem
TreeItem.__init__()
TreeItem.add_child()
TreeItem.children
TreeItem.extra
TreeItem.first()
TreeItem.get()
TreeItem.has_child()
TreeItem.label
TreeItem.last()
TreeItem.name
TreeItem.next()
TreeItem.order
TreeItem.parent
TreeItem.previous()
TreeItem.size
TreeItem.sort()
TreeItem.type
TreeItem.update()
- pyFAI.geometry package
- Module contents
- pyFAI.geometry.core module
Geometry
Geometry.__init__()
Geometry.array_from_unit()
Geometry.calc_pos_zyx()
Geometry.calc_transmission()
Geometry.calcfrom1d()
Geometry.calcfrom2d()
Geometry.center_array()
Geometry.check_chi_disc()
Geometry.chi()
Geometry.chiArray()
Geometry.chi_corner()
Geometry.chia
Geometry.cornerArray()
Geometry.cornerQArray()
Geometry.cornerRArray()
Geometry.cornerRd2Array()
Geometry.corner_array()
Geometry.correct_SA_spline
Geometry.cos_incidence()
Geometry.del_chia()
Geometry.del_dssa()
Geometry.del_qa()
Geometry.del_ra()
Geometry.del_ttha()
Geometry.delta2Theta()
Geometry.deltaChi()
Geometry.deltaQ()
Geometry.deltaR()
Geometry.deltaRd2()
Geometry.delta_array()
Geometry.diffSolidAngle()
Geometry.dist
Geometry.dssa
Geometry.energy
Geometry.getCXI()
Geometry.getFit2D()
Geometry.getImageD11()
Geometry.getPyFAI()
Geometry.getSPD()
Geometry.get_chia()
Geometry.get_config()
Geometry.get_correct_solid_angle_for_spline()
Geometry.get_dist()
Geometry.get_dssa()
Geometry.get_energy()
Geometry.get_mask()
Geometry.get_maskfile()
Geometry.get_parallax()
Geometry.get_pixel1()
Geometry.get_pixel2()
Geometry.get_poni1()
Geometry.get_poni2()
Geometry.get_qa()
Geometry.get_ra()
Geometry.get_rot1()
Geometry.get_rot2()
Geometry.get_rot3()
Geometry.get_shape()
Geometry.get_spline()
Geometry.get_splineFile()
Geometry.get_ttha()
Geometry.get_wavelength()
Geometry.guess_npt_rad()
Geometry.load()
Geometry.make_headers()
Geometry.mask
Geometry.maskfile
Geometry.normalize_azimuth_range()
Geometry.oversampleArray()
Geometry.parallax
Geometry.pixel1
Geometry.pixel2
Geometry.polarization()
Geometry.poni1
Geometry.poni2
Geometry.positionArray()
Geometry.position_array()
Geometry.qArray()
Geometry.qCornerFunct()
Geometry.qFunction()
Geometry.qa
Geometry.quaternion()
Geometry.rArray()
Geometry.rCornerFunct()
Geometry.rFunction()
Geometry.ra
Geometry.rd2Array()
Geometry.read()
Geometry.reset()
Geometry.rot1
Geometry.rot2
Geometry.rot3
Geometry.rotation_matrix()
Geometry.save()
Geometry.setCXI()
Geometry.setChiDiscAtPi()
Geometry.setChiDiscAtZero()
Geometry.setFit2D()
Geometry.setImageD11()
Geometry.setOversampling()
Geometry.setPyFAI()
Geometry.setSPD()
Geometry.set_chia()
Geometry.set_config()
Geometry.set_correct_solid_angle_for_spline()
Geometry.set_dist()
Geometry.set_dssa()
Geometry.set_energy()
Geometry.set_mask()
Geometry.set_maskfile()
Geometry.set_parallax()
Geometry.set_param()
Geometry.set_pixel1()
Geometry.set_pixel2()
Geometry.set_poni1()
Geometry.set_poni2()
Geometry.set_qa()
Geometry.set_ra()
Geometry.set_rot1()
Geometry.set_rot2()
Geometry.set_rot3()
Geometry.set_rot_from_quaternion()
Geometry.set_spline()
Geometry.set_splineFile()
Geometry.set_ttha()
Geometry.set_wavelength()
Geometry.sin_incidence()
Geometry.sload()
Geometry.solidAngleArray()
Geometry.spline
Geometry.splineFile
Geometry.tth()
Geometry.tth_corner()
Geometry.ttha
Geometry.twoThetaArray()
Geometry.wavelength
Geometry.write()
PolarizationArray
PolarizationDescription
- pyFAI.geometry.cxi module
- pyFAI.geometry.fit2d module
- pyFAI.gui package
- pyFAI.gui.cli_calibration module
AbstractCalibration
AbstractCalibration.PARAMETERS
AbstractCalibration.PTS_PER_DEG
AbstractCalibration.UNITS
AbstractCalibration.VALID_URL
AbstractCalibration.__init__()
AbstractCalibration.analyse_options()
AbstractCalibration.chiplot()
AbstractCalibration.configure_parser()
AbstractCalibration.extract_cpt()
AbstractCalibration.get_pixelSize()
AbstractCalibration.initgeoRef()
AbstractCalibration.postProcess()
AbstractCalibration.preprocess()
AbstractCalibration.prompt()
AbstractCalibration.read_dSpacingFile()
AbstractCalibration.read_pixelsSize()
AbstractCalibration.read_wavelength()
AbstractCalibration.refine()
AbstractCalibration.reset_geometry()
AbstractCalibration.set_data()
AbstractCalibration.validate_calibration()
AbstractCalibration.validate_center()
AbstractCalibration.win_error
Calibration
CheckCalib
CliCalibration
MultiCalib
Recalibration
calib()
get_detector()
- pyFAI.gui.jupyter module
- pyFAI.gui.matplotlib module
- pyFAI.gui.peak_picker module
PeakPicker
PeakPicker.VALID_METHODS
PeakPicker.__init__()
PeakPicker.append_mode
PeakPicker.closeGUI()
PeakPicker.contour()
PeakPicker.display_points()
PeakPicker.finish()
PeakPicker.gui()
PeakPicker.help
PeakPicker.init()
PeakPicker.load()
PeakPicker.massif_contour()
PeakPicker.on_minus_pts_clicked()
PeakPicker.on_plus_pts_clicked()
PeakPicker.onclick_append_1_point()
PeakPicker.onclick_append_more_points()
PeakPicker.onclick_erase_1_point()
PeakPicker.onclick_erase_grp()
PeakPicker.onclick_new_grp()
PeakPicker.onclick_option()
PeakPicker.onclick_refine()
PeakPicker.onclick_single_point()
PeakPicker.peaks_from_area()
PeakPicker.remove_grp()
PeakPicker.reset()
PeakPicker.sync_init()
preprocess_image()
- pyFAI.io package
- pyFAI.io.image module
- pyFAI.io.integration_config module
- pyFAI.io.nexus module
Nexus
Nexus.__init__()
Nexus.close()
Nexus.deep_copy()
Nexus.find_detector()
Nexus.flush()
Nexus.get_attr()
Nexus.get_class()
Nexus.get_data()
Nexus.get_dataset()
Nexus.get_default_NXdata()
Nexus.get_entries()
Nexus.get_entry()
Nexus.new_class()
Nexus.new_detector()
Nexus.new_entry()
Nexus.new_instrument()
from_isotime()
get_isotime()
is_hdf5()
load_nexus()
save_NXcansas()
save_NXmonpd()
- pyFAI.io.ponifile module
PoniFile
PoniFile.API_VERSION
PoniFile.__init__()
PoniFile.as_dict()
PoniFile.detector
PoniFile.dist
PoniFile.make_headers()
PoniFile.poni1
PoniFile.poni2
PoniFile.read_from_dict()
PoniFile.read_from_duck()
PoniFile.read_from_file()
PoniFile.read_from_geometryModel()
PoniFile.rot1
PoniFile.rot2
PoniFile.rot3
PoniFile.wavelength
PoniFile.write()
- pyFAI.io.sparse_frame module
- Module contents
- pyFAI.opencl package
- pyFAI.opencl.azim_csr module
OCL_CSR_Integrator
OCL_CSR_Integrator.BLOCK_SIZE
OCL_CSR_Integrator.__init__()
OCL_CSR_Integrator.buffers
OCL_CSR_Integrator.check_mask
OCL_CSR_Integrator.checksum
OCL_CSR_Integrator.compile_kernels()
OCL_CSR_Integrator.guess_workgroup_size()
OCL_CSR_Integrator.integrate()
OCL_CSR_Integrator.integrate_legacy()
OCL_CSR_Integrator.integrate_ng()
OCL_CSR_Integrator.kernel_files
OCL_CSR_Integrator.mapping
OCL_CSR_Integrator.send_buffer()
OCL_CSR_Integrator.set_kernel_arguments()
OCL_CSR_Integrator.sigma_clip()
- pyFAI.opencl.azim_hist module
- pyFAI.opencl.azim_lut module
OCL_LUT_Integrator
OCL_LUT_Integrator.BLOCK_SIZE
OCL_LUT_Integrator.__init__()
OCL_LUT_Integrator.buffers
OCL_LUT_Integrator.check_mask
OCL_LUT_Integrator.checksum
OCL_LUT_Integrator.compile_kernels()
OCL_LUT_Integrator.integrate()
OCL_LUT_Integrator.integrate_legacy()
OCL_LUT_Integrator.integrate_ng()
OCL_LUT_Integrator.kernel_files
OCL_LUT_Integrator.mapping
OCL_LUT_Integrator.send_buffer()
OCL_LUT_Integrator.set_kernel_arguments()
- pyFAI.opencl.preproc module
- pyFAI.opencl.sort module
Separator
Separator.DUMMY
Separator.__init__()
Separator.allocate_buffers()
Separator.filter_horizontal()
Separator.filter_vertical()
Separator.kernel_files
Separator.mean_std_horizontal()
Separator.mean_std_vertical()
Separator.set_kernel_arguments()
Separator.sigma_clip_horizontal()
Separator.sigma_clip_vertical()
Separator.sort_horizontal()
Separator.sort_vertical()
Separator.trimmed_mean_horizontal()
Separator.trimmed_mean_vertical()
- Module contents
- pyFAI.resources package
- pyFAI.utils package
- pyFAI.utils.bayes module
BayesianBackground
BayesianBackground.PREFACTOR
BayesianBackground.__init__()
BayesianBackground.background_image()
BayesianBackground.bayes_llk()
BayesianBackground.bayes_llk_large()
BayesianBackground.bayes_llk_negative()
BayesianBackground.bayes_llk_small()
BayesianBackground.classinit()
BayesianBackground.func2d_min()
BayesianBackground.func_min()
BayesianBackground.s1
BayesianBackground.spline
BayesianBackground.test_bayes_llk()
- pyFAI.utils.decorators module
- pyFAI.utils.ellipse module
- pyFAI.utils.header_utils module
- pyFAI.utils.logging_utils module
- pyFAI.utils.mathutil module
LongestRunOfHeads
binning()
center_of_mass()
chi_square()
cormap()
deg2rad()
dog()
dog_filter()
expand()
expand2d()
gaussian()
gaussian_filter()
interp_filter()
is_far_from_group()
maximum_position()
measure_offset()
relabel()
round_fft()
roundfft()
rwp()
shift()
shiftFFT()
shift_fft()
unBinning()
unbinning()
- pyFAI.utils.orderedset module
- pyFAI.utils.shell module
- pyFAI.utils.stringutil module
- Module contents