Performing distributed AI on a dataset

This page explains how to distribute the azimuthal integration (AI) of a dataset across multiple machines.

First, create a configuration file that describes how the azimuthal integration should be performed.
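For illustration, such a file might look like the sketch below. Only the [computations distribution] section is documented on this page; the [dataset] section name and its key are hypothetical placeholders, not the tool's actual schema.

# config.ini -- illustrative sketch; [dataset] and its key are hypothetical
[dataset]
input = /path/to/dataset.h5

[computations distribution]
partition = gpu
n_workers = 4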

Then, run the integrate-slurm command to launch the distributed azimuthal integration on the cluster.
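A typical invocation might simply take the configuration file as its argument, as sketched below; this usage is an assumption, so consult the command's built-in help for the actual interface.

# assumed usage: pass the configuration file created above
integrate-slurm config.ini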

Run distributed AI on the SLURM cluster

The [computations distribution] section of the configuration file describes how the processing is distributed among the cluster machines.

By default:

[computations distribution]
partition = gpu
n_workers = 4
cores_per_worker = 4
ai_engines_per_worker = 8

With these settings, 4 jobs (workers) are submitted to the gpu partition, each launching 8 pyFAI engines, for a total of 32 engines. Each worker is allocated 4 CPU cores, which are used for multi-threaded LZ4 decompression of the input data.
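Since the workers run as regular SLURM jobs, they can be monitored with the standard SLURM tools, for example:

# list your jobs on the cluster (one entry per worker)
squeue -u $USER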

Run distributed AI on the local machine

Running the distributed integration on the local machine is no longer supported. For single-machine integration, see https://gitlab.esrf.fr/steche/intrange