Using nabu from Python to reconstruct a dataset with a GPU

This notebook shows how to use the Nabu software to perform a basic reconstruction of a tomography dataset.
The computations are done on a local machine with a GPU and Cuda available.

This tutorial goes a bit further than nabu_basic_reconstruction.ipynb:

- The GPU implementation of each component is used
- We see how to start from a configuration file and devise a simple processing chain accordingly

The same dataset is used (binned scan of a bamboo stick, courtesy of Ludovic Broche, ESRF ID19).

1 - Load the dataset information

We must provide nabu with a configuration file (nabu.conf) describing the path to the dataset and the processing steps. This is the equivalent of the .par file in PyHST2. In this file, no information is given on the detector size, energy, distance, etc.: this information is extracted from the dataset metadata.
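For reference, here is a minimal sketch of what such a configuration file can look like (INI format). The section and key names below are taken from the configuration dump shown later in this notebook; the actual bamboo_reduced.conf may contain more entries:

[dataset]
location = /path/to/bamboo_reduced.nx

[phase]
method = paganin
delta_beta = 100.0

[reconstruction]
method = FBP
rotation_axis_position = sliding-window

[output]
location = /path/to/output
file_format = hdf5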

[1]:
import os
from nabu.testutils import utilstest, get_file
from nabu.pipeline.fullfield.processconfig import ProcessConfig
[2]:
print("Getting dataset (downloading if necessary) ...")
data_path = get_file("bamboo_reduced.nx")
print("... OK")

# Get the configuration file of this dataset
conf_fname = get_file("bamboo_reduced.conf")

# Change directory to the path where the data is located (only useful for this tutorial)
os.chdir(utilstest.data_home)

# Parse this configuration file
conf = ProcessConfig(conf_fname)
Getting dataset (downloading if necessary) ...
... OK
Browsing dataset
Updating dataset information with user configuration
WARNING:tomoscan.DEPRECATION:Function get_nexus_paths is deprecated since tomoscan version 0.8.0. Reason: moved. Use 'nxtomo.paths.nxtomo' instead.
  File "/home/pierre/.venv/py311/lib/python3.11/site-packages/tomoscan/esrf/scan/nxtomoscan.py", line 1073, in x_rotation_axis_pixel_position
    nexus_paths = get_nexus_paths(self.nexus_version)
Saved reduced darks/flats to /tmp/nabu_testdata_pierre/bamboo_reduced_flats.hdf5
Doing dataset estimations
Could not get an initial estimate for center of rotation in data file
Estimating center of rotation
CenterOfRotationSlidingWindow.find_shift({'side': 'center', 'window_width': None, 'roi_yxhw': None, 'median_filt_shape': None, 'peak_fit_radius': 1, 'high_pass': None, 'low_pass': None, 'return_validity': False, 'return_relative_to_middle': False})
Estimated center of rotation: 338.986
Doing coupled validation
Cannot do SRCurrent normalization: missing flats and/or projections SRCurrent

Note that ProcessConfig will do quite a few things under the hood:

- Parse the configuration file and check the correctness of the parameters
- Browse the dataset
- Get or compute the reduced flats/darks
- Estimate the center of rotation

The resulting object contains all necessary information to process the dataset.

[3]:
# We can easily get information on the processing steps.
nabu_config = conf.nabu_config
from pprint import pprint
pprint(nabu_config)
# The same can be done with the dataset structure
dataset_info = conf.dataset_info
# print([getattr(dataset_info, attr) for attr in ["energy", "distance", "n_angles", "radio_dims"]])
{'about': {},
 'dataset': {'binning': 1,
             'binning_z': 1,
             'darks_flats_dir': None,
             'exclude_projections': None,
             'hdf5_entry': None,
             'location': '/tmp/nabu_testdata_pierre/bamboo_reduced.nx',
             'nexus_version': 1.0,
             'overwrite_metadata': '',
             'projections_subsampling': (1, 0)},
 'output': {'file_format': 'hdf5',
            'file_prefix': 'bamboo_reduced_rec',
            'float_clip_values': None,
            'jpeg2000_compression_ratio': None,
            'location': '/tmp/nabu_testdata_pierre',
            'overwrite_results': True,
            'tiff_single_file': False},
 'phase': {'ctf_advanced_params': 'length_scale=1e-5; lim1=1e-5; lim2=0.2; '
                                  'normalize_by_mean=True',
           'ctf_geometry': 'z1_v=None; z1_h=None; detec_pixel_size=None; '
                           'magnification=True',
           'delta_beta': 100.0,
           'method': 'paganin',
           'padding_type': 'edge',
           'unsharp_coeff': 0.0,
           'unsharp_method': 'gaussian',
           'unsharp_sigma': 0.0},
 'pipeline': {'resume_from_step': None,
              'save_steps': None,
              'steps_file': None,
              'verbosity': 'info'},
 'postproc': {'histogram_bins': 1000000, 'output_histogram': False},
 'preproc': {'autotilt_options': None,
             'ccd_filter_enabled': False,
             'ccd_filter_threshold': 0.04,
             'detector_distortion_correction': None,
             'detector_distortion_correction_options': None,
             'dff_sigma': None,
             'double_flatfield_enabled': False,
             'flat_distortion_correction_enabled': False,
             'flat_distortion_params': 'tile_size=100; '
                                       "interpolation_kind='linear'; "
                                       "padding_mode='edge'; "
                                       'correction_spike_threshold=None',
             'flatfield': True,
             'log_max_clip': 10.0,
             'log_min_clip': 1e-06,
             'normalize_srcurrent': True,
             'processes_file': None,
             'rotate_projections_center': None,
             'sino_normalization': None,
             'sino_normalization_file': '',
             'sino_rings_correction': 'munch',
             'sino_rings_options': None,
             'take_logarithm': True,
             'tilt_correction': None},
 'reconstruction': {'angle_offset': 0.0,
                    'angles_file': None,
                    'axis_correction_file': None,
                    'centered_axis': False,
                    'clip_outer_circle': False,
                    'cor_options': "side='from_file'",
                    'cor_slice': None,
                    'enable_halftomo': 'auto',
                    'end_x': -1,
                    'end_y': -1,
                    'end_z': -1,
                    'fbp_filter_cutoff': 1.0,
                    'fbp_filter_type': 'ramlak',
                    'hbp_legs': 4,
                    'hbp_reduction_steps': 2,
                    'implementation': None,
                    'iterations': 200,
                    'method': 'FBP',
                    'optim_algorithm': 'chambolle-pock',
                    'outer_circle_value': 0.0,
                    'padding_type': 'edge',
                    'positivity_constraint': True,
                    'preconditioning_filter': True,
                    'rotation_axis_position': 'sliding-window',
                    'sample_detector_dist': None,
                    'source_sample_dist': None,
                    'start_x': 0,
                    'start_y': 0,
                    'start_z': 0,
                    'translation_movements_file': None,
                    'weight_tv': 0.01},
 'resources': {'cpu_workers': 0,
               'gpu_id': [],
               'gpus': 1,
               'memory_per_node': (90.0, True),
               'method': 'local',
               'queue': 'gpu',
               'threads_per_node': (100.0, True),
               'walltime': (1, 0, 0)}}
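Individual entries can then be accessed directly, for example (these attributes are used further below in this notebook):

print(nabu_config["phase"]["delta_beta"])  # 100.0
print(dataset_info.energy, dataset_info.distance, dataset_info.pixel_size)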

2 - Chunk processing

Nabu processes data in chunks of radios (see the documentation for more explanations).
As a first step, we define how to read chunks of radios.
[4]:
from nabu.io.reader import NXTomoReader
What is the largest chunk size we can process?
The answer is obtained by inspecting the available GPU memory and the processing steps.
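As a rough sanity check, we can relate the chunk size to the memory footprint of the raw data alone (a back-of-the-envelope sketch; estimate_max_chunk_size also accounts for the memory required by each processing step, so its result will be smaller):

from nabu.cuda.utils import get_gpu_memory

n_angles, n_x = 1000, 640         # dimensions of this dataset
gpu_mem_gb = get_gpu_memory(0)    # device memory (assumed to be in GB here)
row_nbytes = n_angles * n_x * 4   # one detector row across all float32 radios
# Leave roughly half of the device memory for intermediate buffers
print(int(0.5 * gpu_mem_gb * 1e9 / row_nbytes))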
[5]:
from nabu.cuda.utils import get_gpu_memory
from nabu.pipeline.fullfield.computations import estimate_max_chunk_size
/home/pierre/.venv/py311/lib/python3.11/site-packages/pytools/persistent_dict.py:63: RecommendedHashNotFoundWarning: Unable to import recommended hash 'siphash24.siphash13', falling back to 'hashlib.sha256'. Run 'python3 -m pip install siphash24' to install the recommended hash.
  warn("Unable to import recommended hash 'siphash24.siphash13', "
[6]:
chunk_size = estimate_max_chunk_size(
    get_gpu_memory(0),
    conf
)
print("Chunk_size = %d" % chunk_size)
Chunk_size = 540
[7]:
# Load the first 'chunk_size' rows of all the radios,
# i.e. do projections_data[:, 0:chunk_size, :]
sub_region = (
    slice(None),
    slice(0, chunk_size),
    slice(None)
)
projections_reader = NXTomoReader(
    data_path,
    sub_region=sub_region,
)
[8]:
# Load the current chunk
projections = projections_reader.load_data() # takes some time
[9]:
print(projections.shape)
print(projections.dtype)
(1000, 540, 640)
float32

3 - Initialize the GPU

Most of the processing can be done on the GPU (or on a many-core CPU when using OpenCL).
With pycuda.gpuarray (or its OpenCL counterpart pyopencl.array), we manipulate array objects whose memory resides on the device. This avoids extraneous host <-> device copies.
[10]:
import pycuda.gpuarray as garray
from nabu.cuda.utils import get_cuda_context
import numpy as np
[11]:
# Create a Cuda context on device ID 0.
# By default, all subsequent GPU processing will be bound to this context
ctx = get_cuda_context(device_id=0)
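With the context created, the typical gpuarray usage is a simple round trip (a minimal sketch, using the numpy and pycuda.gpuarray modules imported above):

host_array = np.ones((4, 4), dtype=np.float32)
d_arr = garray.to_gpu(host_array)  # host -> device transfer
d_arr *= 2                         # computed on the GPU, in place
result = d_arr.get()               # device -> host transfer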
[12]:
n_angles, n_z, n_x = projections.shape
# Transfer the chunk to the GPU
d_radios = garray.to_gpu(projections)

4 - Pre-processing

Pre-processing utilities are available in the nabu.preproc module.
The Cuda counterparts of these utilities are implemented in modules with a _cuda suffix.
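For example, the flat-field normalization used below exists in both flavors (the CPU class name is inferred from this naming convention):

from nabu.preproc.flatfield import FlatField           # CPU implementation
from nabu.preproc.flatfield_cuda import CudaFlatField  # Cuda implementation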

4.1 - Flat-field

[13]:
from nabu.preproc.flatfield_cuda import CudaFlatField
[14]:
radios_indices = sorted(conf.dataset_info.projections.keys())
# Configure the `FlatField` processor
cuda_flatfield = CudaFlatField(
    d_radios.shape,
    dataset_info.get_reduced_flats(sub_region=sub_region),
    dataset_info.get_reduced_darks(sub_region=sub_region),
    radios_indices=radios_indices,
)
[15]:
# Perform the normalization on GPU
if nabu_config["preproc"]["flatfield"]:
    print("Doing flat-field")
    cuda_flatfield.normalize_radios(d_radios)
Doing flat-field
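For each projection, normalize_radios applies the usual dark/flat correction in place on the GPU. A simplified numpy sketch of the per-radio operation (the actual implementation also interpolates between several reduced flats when available, which is why radios_indices is passed above):

import numpy as np

def flatfield_normalize(radio, flat, dark):
    # radio, flat, dark: 2D arrays of identical shape
    return (radio - dark) / (flat - dark)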

4.2 - Phase retrieval

[16]:
from nabu.preproc.phase_cuda import CudaPaganinPhaseRetrieval
[17]:
energy = dataset_info.energy
# Phase retrieval is done on each radio individually, with the sub-region specified above
if (nabu_config["phase"]["method"] or "").lower() == "paganin":
    print("Doing phase retrieval")
    cudapaganin = CudaPaganinPhaseRetrieval(
        (n_z, n_x),
        distance=dataset_info.distance,
        energy=energy,
        delta_beta=nabu_config["phase"]["delta_beta"],
        pixel_size=dataset_info.pixel_size * 1e6,
    )
    for i in range(n_angles):
        cudapaganin.apply_filter(d_radios[i], output=d_radios[i])
Doing phase retrieval
/home/pierre/.venv/py311/lib/python3.11/site-packages/pytools/persistent_dict.py:63: RecommendedHashNotFoundWarning: Unable to import recommended hash 'siphash24.siphash13', falling back to 'hashlib.sha256'. Run 'python3 -m pip install siphash24' to install the recommended hash.
  warn("Unable to import recommended hash 'siphash24.siphash13', "
/home/pierre/.venv/py311/lib/python3.11/site-packages/pytools/persistent_dict.py:63: RecommendedHashNotFoundWarning: Unable to import recommended hash 'siphash24.siphash13', falling back to 'hashlib.sha256'. Run 'python3 -m pip install siphash24' to install the recommended hash.
  warn("Unable to import recommended hash 'siphash24.siphash13', "
/home/pierre/.venv/py311/lib/python3.11/site-packages/skcuda/cublas.py:284: UserWarning: creating CUBLAS context to get version number
  warnings.warn('creating CUBLAS context to get version number')
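Under the hood, the Paganin method applies a low-pass filter to each radio in Fourier space. Here is a simplified, unpadded numpy sketch of that filter (the actual CudaPaganinPhaseRetrieval also handles padding and unit conversions):

import numpy as np

def paganin_filter(radio, distance_m, energy_kev, delta_beta, pixel_size_m):
    wavelength_m = 1.23984e-9 / energy_kev  # lambda = hc/E, with E in keV
    ny, nx = radio.shape
    fy = np.fft.fftfreq(ny, d=pixel_size_m)
    fx = np.fft.fftfreq(nx, d=pixel_size_m)
    f2 = fy[:, None] ** 2 + fx[None, :] ** 2  # squared spatial frequencies
    # Paganin filter: 1 / (1 + pi * lambda * D * (delta/beta) * |f|^2)
    denom = 1.0 + np.pi * wavelength_m * distance_m * delta_beta * f2
    return np.fft.ifft2(np.fft.fft2(radio) / denom).real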

4.3 - Logarithm

[18]:
from nabu.preproc.ccd_cuda import CudaLog
[19]:
if nabu_config["preproc"]["take_logarithm"]:
    print("Taking logarithm")
    cuda_log = CudaLog(d_radios.shape, clip_min=0.01)
    cuda_log.take_logarithm(d_radios)
Taking logarithm
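This step converts the normalized transmission images into line integrals of the attenuation coefficient (Beer-Lambert law). A simplified numpy equivalent of what CudaLog does in place:

import numpy as np

def mlog(radios, clip_min=0.01):
    # -log of the normalized radios; clipping avoids log(0) on dead pixels
    return -np.log(np.clip(radios, clip_min, None))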

5 - Reconstruction

We use filtered backprojection (FBP) with nabu.reconstruction.fbp.

[20]:
from nabu.reconstruction.fbp import Backprojector
[21]:
rec_options = conf.processing_options["reconstruction"]
B = Backprojector(
    (n_angles, n_x),
    angles=rec_options["angles"],
    rot_center=rec_options["rotation_axis_position"],
    padding_mode="edges",
    # extra_options={"use_textures": False}
)
d_recs = garray.zeros((n_z, n_x, n_x), "f")
/home/pierre/.venv/py311/lib/python3.11/site-packages/pytools/persistent_dict.py:63: RecommendedHashNotFoundWarning: Unable to import recommended hash 'siphash24.siphash13', falling back to 'hashlib.sha256'. Run 'python3 -m pip install siphash24' to install the recommended hash.
  warn("Unable to import recommended hash 'siphash24.siphash13', "
/home/pierre/.venv/py311/lib/python3.11/site-packages/pytools/persistent_dict.py:63: RecommendedHashNotFoundWarning: Unable to import recommended hash 'siphash24.siphash13', falling back to 'hashlib.sha256'. Run 'python3 -m pip install siphash24' to install the recommended hash.
  warn("Unable to import recommended hash 'siphash24.siphash13', "
/home/pierre/.venv/py311/lib/python3.11/site-packages/skcuda/cublas.py:284: UserWarning: creating CUBLAS context to get version number
  warnings.warn('creating CUBLAS context to get version number')
[22]:
print("Reconstructing...", end="")
for i in range(n_z):
    B.fbp(d_radios[:, i, :], output=d_recs[i])
recs = d_recs.get()
print(" ... OK")
Reconstructing... ... OK

6 - Visualize

[23]:
%matplotlib inline
import matplotlib.pyplot as plt
[24]:
plt.figure()
plt.imshow(recs[0], cmap="gray")
[24]:
<matplotlib.image.AxesImage at 0x7fbcc27cc790>