Configuration parameters#

This file lists all the configuration parameters currently available in the configuration file.

dataset#

Dataset(s) to integrate. This can be a comma-separated list and may contain wildcards. The HDF5/NeXus files have to contain entries such as ‘1.1’.

location = 

Entry in the HDF5 file, e.g. ‘1.1’. Default (empty) means that ALL entries will be processed.

hdf5_entry = 

Which folder layout to use for parsing datasets and creating output files. Available layouts are: ID15A, ID11.

layout = ID15A
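
For illustration, the three parameters above could be combined as follows (the data path and file names are hypothetical, only meant to show the wildcard syntax):

  location = /data/visitor/ma0000/id15a/sample_*.h5
  hdf5_entry = 1.1
  layout = ID15A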

azimuthal integration#

Number of bins of the integrated profile (i.e. points along the radial axis).

n_pts = 2500

Radial unit. Can be r_mm, q_A^-1, or 2th_deg.

unit = r_mm

Detector name (e.g. eiger2_cdte_4m), or path to a detector specification file (e.g. id15_pilatus.h5).

detector = 

Force the detector name if it is not present in the ‘detector’ specification file.

detector_name = 

Path to the mask file

mask_file = 

Path to the flatfield file. If provided, flat-field normalization will be applied.

flatfield_file = 

Path to the dark file. If provided, dark current will be subtracted from the raw data.

dark_file = 

Path to the pyFAI calibration file

poni_file = 
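
As a sketch, the detector and calibration-related files could be declared as follows (all file names below are hypothetical placeholders):

  detector = eiger2_cdte_4m
  mask_file = /path/to/mask.edf
  flatfield_file = /path/to/flatfield.h5
  dark_file = /path/to/dark.h5
  poni_file = /path/to/calibration.poni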

Error model for azimuthal integration. Can be poisson or None.

error_model = poisson

Azimuthal range, in the form (min, max, n_slices), where n_slices is the number of azimuthal sectors (caking).

azimuthal_range = (-180., 180., 1)

Lower and upper bounds of the radial range, in the radial unit chosen above. If not provided, the range is simply (data.min(), data.max()). Values outside the range are ignored.

radial_range = 
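
For instance, to cake the data into 8 azimuthal sectors while restricting the radial range (the numbers below are arbitrary and expressed in the radial unit chosen above, here r_mm):

  unit = r_mm
  azimuthal_range = (-180., 180., 8)
  radial_range = (0., 300.)

With n_slices = 8, each azimuthal sector spans 45 degrees.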

Which azimuthal integration method to use.

ai_method = opencl

Polarization factor for pyFAI. Default is 1.0.

polarization_factor = 1.0

Whether to apply the solid-angle correction. Set this parameter to 1 to enable the correction.

correct_solid_angle = 0

Whether to additionally compute the mean/median of the integrated stacks. Can be 0 (disabled), ‘mean’ or ‘median’.

average_xy = 0

Whether to perform pixel splitting. Possible values are: no (or 0), BBox, pseudo, full

pixel_splitting = 0

Which outlier-removal method to use. Can be none (default), median, or sigma clip. NB: with either of the latter two, neither azimuthal caking nor uncertainty estimation is possible; only one azimuthal slice is used.

trim_method = 

Number of azimuthal bins for trimmed mean.

trim_n_pts = 

Bounds for the trim methods (see the example below):

  • median: percentiles in the form (cut_low, cut_high). Integration uses medfilt1d with only one azimuthal slice.

  • sigma clip: keep only pixels with intensity |I - mean(I)| < thres * std(I).

trim_bounds = 
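
For instance, a median-based trim keeping pixels between the 20th and 80th percentiles (arbitrary example values) would read:

  trim_method = median
  trim_bounds = (20, 80)

For sigma clipping, trim_bounds would presumably hold the single threshold ‘thres’ used in the formula above.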

computations distribution#

Name of the SLURM partition (queue)

partition = gpu

Number of workers to use. If partition != local, it corresponds to the number of SLURM jobs submitted.

n_workers = 4

Number of CPU cores (threads) to use per worker, mostly for LZ4 decompression of data.

cores_per_worker = 4

Number of AI engines per worker. Each AI engine is spawned in a process with ‘cores_per_worker’ threads.

ai_engines_per_worker = 8
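
With the defaults above, each of the 4 SLURM jobs spawns 8 AI engine processes of 4 threads each, i.e. roughly 32 threads per job. As a sketch, a lighter setup could be (arbitrary values):

  partition = gpu
  n_workers = 2
  cores_per_worker = 4
  ai_engines_per_worker = 4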

Time limit for each SLURM job.

time = 01:00:00

Amount of memory per worker. Default is 100 GB.

memory_per_worker = 100GB

Defines the Python executable to use for each SLURM partition. Note that entries are separated by a semicolon (;).

python_executables = nice='/scisoft/tomotools/integrator/x86_64/2024.1.0/bin/python' ; p9gpu='/scisoft/tomotools/integrator/ppc64le/2024.1.0/bin/python' ; p9gpu-long='/scisoft/tomotools/integrator/ppc64le/2024.1.0/bin/python' ; gpu='/scisoft/tomotools/integrator/x86_64/2024.1.0/bin/python'

Path to the ‘worker space’, a directory where integrator stores files used for distribution and communication with the workers. Empty means the same directory as the configuration file.

workspace_path = 

output#

Path to the output file. If not provided, it will be created in the same directory as the input dataset. Note: a directory with the same name (without the extension) will be created at the same level, for storing the actual integrated data.

location = 
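
For example, with a hypothetical output location such as

  location = /path/to/results/sample_0001.h5

a directory /path/to/results/sample_0001 is created at the same level to hold the actual integrated data.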

What to do if output already exists. Possible values are:

  • skip: go to the next image if the output already exists (and has the same processing configuration)

  • reprocess_if_conffile_more_recent: re-do the processing if the configuration file was edited more recently than the integration file

  • overwrite: re-do the processing, overwrite the file

  • raise: raise an error and exit

existing_output = reprocess_if_conffile_more_recent

Whether to repack the output data, i.e. transform the virtual datasets into contiguous datasets. If activated, the partial result files are deleted.

repack_output = 1

Subfolder where the partial integration files (one per acquisition file) should be saved. If ‘repack_output’ is set to 1, these partial files should disappear at the end of the processing.

partial_files_subfolder = 

Which file mode to use when creating new files and directories. The value must be an octal number as passed to the ‘chmod’ command, e.g. 775, 777, 755, …

file_mode = 775

Which metadata (e.g. motor positions, diode readings) should be propagated into the output file. This should be a comma-separated list of values.

try_metadata = 
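
As a sketch, assuming motor and counter names hrz, vrz and diode1 (hypothetical names, which depend on the beamline setup):

  try_metadata = hrz, vrz, diode1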

pipeline#

Level of verbosity of the processing: 0 = terse, 3 = very verbose.

verbosity = 2