integrator.base module
- class integrator.base.ClusterConfiguration(workers: int = 1, cores: int = 8, processes: int = 1, memory: str = '100GB', project: str = 'azimuthal_integration', walltime: str = '01:00:00', queue: str = 'nice', scheduler_options: Any = None, python: str = '/home/pierre/.venv/python39/bin/python3', job_extra: Any = None)
Bases: object
- workers: int = 1
- cores: int = 8
- processes: int = 1
- memory: str = '100GB'
- project: str = 'azimuthal_integration'
- walltime: str = '01:00:00'
- queue: str = 'nice'
- scheduler_options: Any = None
- python: str = '/home/pierre/.venv/python39/bin/python3'
- job_extra: Any = None
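The attributes above describe a batch-cluster job reservation. As a minimal sketch, assuming the class accepts its attributes as keyword arguments (the queue name and interpreter path below are illustrative placeholders, not values prescribed by the module):

```python
from integrator.base import ClusterConfiguration

# Minimal sketch: override a few defaults, keep the rest.
# The queue name and interpreter path are illustrative placeholders.
resources = ClusterConfiguration(
    workers=4,                  # number of cluster jobs to request
    cores=16,                   # CPU cores per job
    memory="200GB",             # memory per job
    walltime="02:00:00",        # wall-clock limit per job (HH:MM:SS)
    queue="nice",               # target batch queue
    python="/usr/bin/python3",  # interpreter used by the workers
)
```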
- class integrator.base.DistributedIntegrator(ai_config, resources, dataset=None, output_file=None, n_images_per_integrator='auto', logger=None, extra_options=None, **integrator_kwargs)
Bases: object
A base class for distributing StackIntegrator instances.
Initialize a DistributedIntegrator.
- Parameters
ai_config (AIConfiguration) – Azimuthal Integration configuration
resources (Any) – Data structure describing the computational resources (for example, a ClusterConfiguration)
dataset (DatasetParser, optional) – XRD dataset information object. If not provided, set_new_dataset() will have to be called prior to integrate_dataset().
output_file (str, optional) – Path where the integrated data will be stored (hdf5 file)
n_images_per_integrator (int or 'auto', optional) – Number of images to process in each stack. By default, it is inferred automatically by inspecting the dataset file
logger (Logger, optional) – Logger object
extra_options (dict, optional) – Dictionary of advanced options. Current values are:
- "create_h5_subfolders": True
  Whether to create a sub-directory to store the files that contain the result of each integrated stack
- "scan_num_as_h5_entry": False
  Whether to use the current scan number as the HDF5 entry.
The other named arguments (**integrator_kwargs) are passed to the StackIntegrator class.
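Putting the pieces together, here is a hedged sketch of a typical construction, reusing the ClusterConfiguration example above. The ai_config (an AIConfiguration) and dataset (a DatasetParser) objects are assumed to be built elsewhere, and the output path is a placeholder:

```python
from integrator.base import DistributedIntegrator

# Hedged sketch: "ai_config" and "dataset" are assumed to be an
# AIConfiguration and a DatasetParser built elsewhere; "resources" is
# the ClusterConfiguration from the example above.
integrator = DistributedIntegrator(
    ai_config,
    resources,
    dataset=dataset,
    output_file="/path/to/integrated.h5",  # placeholder HDF5 output path
    extra_options={
        "create_h5_subfolders": True,   # one sub-directory per stack result
        "scan_num_as_h5_entry": False,  # keep the default HDF5 entry naming
    },
)
integrator.integrate_dataset()  # method referenced in the parameter notes above
```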
- set_new_dataset(dataset, output_file)
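Since a dataset is optional at construction time, set_new_dataset() lets one integrator be reused across datasets; per the parameter notes above, it must be called before integrate_dataset() when no dataset was supplied. A hedged sketch, where parse_dataset is a hypothetical helper returning DatasetParser objects:

```python
# Hedged sketch: reuse the same integrator for several scans.
# "parse_dataset" is a hypothetical helper that returns DatasetParser objects.
for scan_path, out_path in [
    ("scan_0001", "scan_0001.h5"),
    ("scan_0002", "scan_0002.h5"),
]:
    integrator.set_new_dataset(parse_dataset(scan_path), out_path)
    integrator.integrate_dataset()
```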