integrator.writer module#

integrator.writer.collect_results_from_files(out_files)#
integrator.writer.create_output_file_id15a(scan, partial_output_files, ai_config, output_file, delete_partial_files=True, use_virtual_dataset=False, extra_metadata=None)#

Create the final output file with the ESRF ID15A layout. A usage sketch follows the parameter list.

Parameters:
  • scan (HDF5Dataset object) – Data structure with information on the scan.

  • partial_output_files (list of str) – List of paths to the partial output files.

  • ai_config (AIConfiguration) – Data structure with the azimuthal integration configuration.

  • output_file (str or dict) – Path to the final output file. It will be overwritten if it already exists. If a dict is provided, the output file is output_file[scan.dataset_hdf5_url.path()].
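
A minimal usage sketch, under the assumption that `scan` (an HDF5Dataset) and `ai_config` (an AIConfiguration) have already been built by the surrounding processing pipeline; their construction is not shown, and the file paths below are hypothetical placeholders.

```python
from integrator.writer import create_output_file_id15a

# `scan` and `ai_config` are assumed to be provided by the rest of the pipeline.
partial_files = [
    "/data/processed/scan0001_part_000.h5",  # hypothetical partial result files
    "/data/processed/scan0001_part_001.h5",
]

create_output_file_id15a(
    scan,                    # HDF5Dataset describing the scan
    partial_files,           # paths to the partial output files
    ai_config,               # AIConfiguration used for the integration
    "/data/processed/scan0001_integrated.h5",  # final file, overwritten if it exists
    delete_partial_files=True,    # keyword defaults shown explicitly
    use_virtual_dataset=False,
)
```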

integrator.writer.create_output_file(scan, partial_output_files, ai_config, output_file, layout, extra_metadata=None)#
integrator.writer.watch_scans_and_create_output_files(scans_info, ai_config, layout, period=0.5, logger=None, extra_metadata=None, repack_output=False, repack_output_options=None)#
integrator.writer.submit_repack_output(out_file, average_xy=None, python_env=None, slurm_resources=None, log_dir=None, workspace_dir=None)#
integrator.writer.repack_output_file(output_file, average_xy=None)#

Transform an output file containing a virtual dataset into an output file containing all the data. This was written for ID15A, where using too many files (partial results) was problematic for data archival.
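
A minimal sketch of repacking an existing virtual-dataset output file in place; the path is a hypothetical placeholder and `average_xy` is left at its default.

```python
from integrator.writer import repack_output_file

# Replace the virtual-dataset layout with a single self-contained file.
repack_output_file("/data/processed/scan0001_integrated.h5")
```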