integrator.distributed_integration module#
- integrator.distributed_integration.actor_process_stack(start_idx, end_idx)#
- integrator.distributed_integration.actor_get_attribute(attr_name)#
- integrator.distributed_integration.actor_generic_method(function_name, *args, **kwargs)#
- integrator.distributed_integration.get_actors()#
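The four module-level functions above are worker-side helpers for driving each remote integrator process. A minimal illustrative sketch of how they might be combined is given below; it is an assumption that each actor wraps one StackIntegrator, the attribute name "output_file" queried here is borrowed from the constructor parameters as a plausible example, and the actual remote-execution backend (SLURM job steps or local processes) is managed by the classes documented next.

from integrator.distributed_integration import (
    get_actors,
    actor_process_stack,
    actor_get_attribute,
    actor_generic_method,
)

# Handles to the currently registered worker processes.
actors = get_actors()

# Integrate one stack of images, identified by its index range.
actor_process_stack(0, 1000)

# Read an attribute of the wrapped integrator (attribute name is hypothetical).
out = actor_get_attribute("output_file")

# Call an arbitrary method of the wrapped integrator by name
# (the method name passed here is hypothetical).
eta = actor_generic_method("get_eta")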
- class integrator.distributed_integration.DistributedStackIntegrator(ai_config, resources, dataset=None, output_file=None, n_images_per_integrator='auto', logger=None, extra_options=None, **integrator_kwargs)#
Bases: integrator.base.DistributedIntegrator
A class for distributing StackIntegrator over SLURM; a usage sketch follows the method list below.
Initialize a DistributedStackIntegrator.
- Parameters
ai_config (AIConfiguration) – Azimuthal Integration configuration
resources (any) – Data structure describing computational resources
dataset (DatasetParser, optional) – XRD dataset information object. If not provided, set_new_dataset() will have to be called prior to integrate_dataset().
output_file (str, optional) – Path where the integrated data will be stored (HDF5 file)
n_images_per_integrator (int or 'auto', optional) – Number of images to process in each stack. By default ('auto'), it is inferred by inspecting the dataset file
logger (Logger) – Logger object
extra_options (dict, optional) – Dictionary of advanced options. Current values are:
- "create_h5_subfolders": True
Whether to create a sub-directory to store the files containing the result of each integrated stack
- "scan_num_as_h5_entry": False
Whether to use the current scan number as the HDF5 entry name.
**integrator_kwargs – The other named arguments are passed to the StackIntegrator class.
- get_workers()#
- workers_are_alive()#
Return True if the computation resources are still available, False otherwise.
- set_new_dataset(dataset, output_file, timeout=None)#
Set a new dataset (and output file) to be integrated by the existing workers.
- distribute_integration_tasks(n_images=None, check_workers=False)#
- integrate_dataset(timeout=None, retry_failed_tasks=0)#
- get_eta()#
- create_output_files()#
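A minimal usage sketch, assuming an AIConfiguration, a resources description and DatasetParser objects have already been built elsewhere (their construction is outside this module, and the file paths are placeholders):

from integrator.distributed_integration import DistributedStackIntegrator

integrator = DistributedStackIntegrator(
    ai_config,                    # AIConfiguration for the azimuthal integration
    resources,                    # description of the SLURM computational resources
    dataset=dataset,              # DatasetParser describing the XRD dataset
    output_file="integrated.h5",  # HDF5 file receiving the integrated data
    extra_options={
        "create_h5_subfolders": True,
        "scan_num_as_h5_entry": False,
    },
)

# Check that the SLURM resources are still available before launching.
if integrator.workers_are_alive():
    integrator.integrate_dataset(timeout=3600, retry_failed_tasks=1)

# The same workers can be re-used for another dataset.
integrator.set_new_dataset(next_dataset, "integrated_2.h5")
integrator.integrate_dataset()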
- class integrator.distributed_integration.LocalDistributedStackIntegrator(ai_config, resources, dataset=None, output_file=None, n_images_per_integrator='auto', logger=None, extra_options=None, **integrator_kwargs)#
Bases: integrator.distributed_integration.DistributedStackIntegrator
A class for distributing stack integration on the local machine; a usage sketch follows the parameter list below.
Initialize a DistributedStackIntegrator.
- Parameters
ai_config (AIConfiguration) – Azimuthal Integration configuration
resources (any) – Data structure describing computational resources
dataset (DatasetParser, optional) – XRD dataset information object. If not provided, set_new_dataset() will have to be called prior to integrate_dataset().
output_file (str, optional) – Path where the integrated data will be stored (HDF5 file)
n_images_per_integrator (int or 'auto', optional) – Number of images to process in each stack. By default ('auto'), it is inferred by inspecting the dataset file
logger (Logger) – Logger object
extra_options (dict, optional) – Dictionary of advanced options. Current values are:
- "create_h5_subfolders": True
Whether to create a sub-directory to store the files containing the result of each integrated stack
- "scan_num_as_h5_entry": False
Whether to use the current scan number as the HDF5 entry name.
**integrator_kwargs – The other named arguments are passed to the StackIntegrator class.
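The local variant takes the same arguments as DistributedStackIntegrator; only the resources argument would describe the local machine rather than a SLURM allocation. A sketch under the same assumptions as above (local_resources is a placeholder for a locally-built resources description):

from integrator.distributed_integration import LocalDistributedStackIntegrator

local_integrator = LocalDistributedStackIntegrator(
    ai_config,
    local_resources,              # assumption: describes local cores/memory
    dataset=dataset,
    output_file="integrated_local.h5",
    n_images_per_integrator=500,  # override the automatic ('auto') inference
)
local_integrator.integrate_dataset()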