integrator.integrator module#
- class integrator.integrator.AIConfiguration(n_pts: int, unit: str, polarization_factor: float, correct_solid_angle: bool, detector: str, poni_file: str, mask_file: str, error_model: str, azimuthal_range: tuple, radial_range: tuple, ai_method: str, trim_method: str = None, trim_n_pts: int = 0, trim_bounds: tuple = None)#
Bases: object
- n_pts: int#
- unit: str#
- polarization_factor: float#
- correct_solid_angle: bool#
- detector: str#
- poni_file: str#
- mask_file: str#
- error_model: str#
- azimuthal_range: tuple#
- radial_range: tuple#
- ai_method: str#
- trim_method: str = None#
- trim_n_pts: int = 0#
- trim_bounds: tuple = None#
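A construction sketch is shown below. The field values (detector name, file paths, ranges, method labels) are placeholders, and the keyword-argument call assumes AIConfiguration is a plain dataclass-style container:

from integrator.integrator import AIConfiguration

# Illustrative values only: paths, ranges and method names must be
# adapted to the actual experiment and pyFAI installation.
ai_config = AIConfiguration(
    n_pts=2000,
    unit="q_A^-1",
    polarization_factor=0.99,
    correct_solid_angle=True,
    detector="Pilatus1M",                  # pyFAI detector name (assumed)
    poni_file="/path/to/calibration.poni",
    mask_file="/path/to/mask.edf",
    error_model="poisson",
    azimuthal_range=(-180.0, 180.0),
    radial_range=(0.0, 40.0),
    ai_method="csr",                       # integration method label (assumed)
)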
- integrator.integrator.create_azimuthal_integrator(ai_config)#
Create an AzimuthalIntegrator object from a configuration.
- Parameters
ai_config (AIConfiguration) – Data structure describing the azimuthal integration configuration
- Returns
azimuthal_integrator – Azimuthal integrator instance
- Return type
pyFAI.azimuthalIntegrator.AzimuthalIntegrator
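Usage sketch, assuming ai_config is an AIConfiguration instance like the one shown above:

from integrator.integrator import create_azimuthal_integrator

# Build a pyFAI AzimuthalIntegrator from the configuration.
ai = create_azimuthal_integrator(ai_config)

# The result is a regular pyFAI integrator, so the standard pyFAI API
# applies, e.g. ai.integrate1d(image, ai_config.n_pts, unit=ai_config.unit)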
- class integrator.integrator.StackIntegrator(ai_config, dataset=None, output_dir=None, stack_size=None, logger=None, existing_output='skip', do_stack_mean=False, extra_options=None)#
Bases: object
Initialize a StackIntegrator object.
- Parameters
ai_config (AIConfiguration) – Azimuthal Integration configuration
stack_size (int) – Number of images to process at once
dataset (HDF5Dataset, optional) – XRD-CT dataset. If not set, set_new_dataset() must be called before process_stack().
output_dir (str, optional) – Path to the directory where the files are written. If provided, a file output_xxxx.h5 is created, where "xxxx" is the index of the current stack. Otherwise, results are not written to disk.
logger (Logger, optional) – Logger object. If not provided, messages are shown with the print() function
existing_output (str, optional) –
What to do when the output file already exists:
"raise": raise an error
"overwrite": overwrite the existing file
"skip": do not process the current stack
extra_options (dict, optional) –
- Dictionary of advanced options. Available keys and their default values:
- "scan_num_as_h5_entry": False
Whether to use the current scan number as the HDF5 entry.
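A minimal instantiation sketch; "dataset" is assumed to be an HDF5Dataset opened elsewhere, and the paths are placeholders:

from integrator.integrator import StackIntegrator

# One output_xxxx.h5 file is written per stack of 100 images.
integrator = StackIntegrator(
    ai_config,
    dataset=dataset,
    output_dir="/path/to/output",
    stack_size=100,
    existing_output="overwrite",
    do_stack_mean=False,
    extra_options={"scan_num_as_h5_entry": False},
)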
- set_new_dataset(dataset, output_dir, stack_size, existing_output='skip')#
Configure a new dataset to process.
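Sketch of the deferred-dataset workflow mentioned in the constructor documentation; the dataset object and output path are placeholders:

# Create the integrator without a dataset, then attach one later.
integrator = StackIntegrator(ai_config)
integrator.set_new_dataset(
    dataset,                       # HDF5Dataset instance (assumed)
    "/path/to/output_scan_0002",
    stack_size=100,
    existing_output="skip",
)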
- set_stacksize(stack_size)#
- initialize_ai_engine(ai_config=None)#
Initialize a pyFAI engine based on an AIConfiguration object.
- get_writer_configuration()#
- get_pyfai_configuration()#
- property result#
- integrate_image(image)#
- process_stack(start_idx, end_idx)#
- process_full_dataset()#
- create_output_masterfile(files, output_file)#
- create_legacy_output_file(files, output_file)#
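An end-to-end processing sketch under the same assumptions; the number of images is a placeholder, since the HDF5Dataset interface is not documented here:

# Integrate every image in the configured dataset, writing one file per stack.
integrator.process_full_dataset()

# Or iterate over stacks explicitly (n_images stands in for however the
# dataset exposes its length):
# for start in range(0, n_images, 100):
#     integrator.process_stack(start, min(start + 100, n_images))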