In order to perform the flat field correction (optional in nabu), an acquisition must contain reduced
darks and flats.
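As a reminder, the standard flat field normalization divides the dark-subtracted projection by the dark-subtracted flat. A minimal numpy sketch with synthetic frames (the array shapes and values are illustrative, not from a real acquisition):

```python
import numpy

# illustrative synthetic frames: one projection, one reduced dark, one reduced flat
projection = numpy.full((4, 4), 900.0, dtype=numpy.float32)
reduced_dark = numpy.full((4, 4), 100.0, dtype=numpy.float32)
reduced_flat = numpy.full((4, 4), 1100.0, dtype=numpy.float32)

# standard flat field normalization: (projection - dark) / (flat - dark)
corrected = (projection - reduced_dark) / (reduced_flat - reduced_dark)  # ≈ 0.8 everywhere
```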
These reduced darks and flats come from the raw 'dark' and 'flat' frames.
In general we expect those frames to be part of the NXtomo, and we call the dark and flat field construction
widget to generate the reduced ones.
This is usually the first processing to run: this way the flat field correction can be done, which is useful for and / or required by many processes.
By default the reduced dark(s) are obtained by computing the mean of the raw dark frames, and the reduced flat(s) by computing the median of the raw flat frames.
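The default reduction described above can be sketched with numpy (the frame stacks below are synthetic, with the frame index along the first axis):

```python
import numpy

# synthetic stacks of raw frames, shape (n_frames, n_rows, n_cols)
raw_darks = numpy.array([numpy.full((4, 4), v) for v in (9.0, 10.0, 11.0)])
raw_flats = numpy.array([numpy.full((4, 4), v) for v in (990.0, 1000.0, 5000.0)])

reduced_dark = raw_darks.mean(axis=0)           # mean of the raw darks -> 10.0
reduced_flat = numpy.median(raw_flats, axis=0)  # median of the raw flats -> 1000.0 (robust to the outlier frame)
```

Using the median for flats makes the reduction robust to the occasional outlier frame, as shown by the 5000.0 frame above.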
from IPython.display import Video
Video("video/reduced_darks_flats_widget.mp4", embed=True, height=500)
The reduced darks and flats are saved to {dataset_prefix}_darks.hdf5
and {dataset_prefix}_flats.hdf5
files. Here is a screenshot of a reduced_flats file containing a single series of flats at the beginning:
In some cases users may need to reuse existing reduced darks or flats.
If the reduced darks and flats already exist, you can simply use the 'copy' option from the reduced dark and flat widget.
!!! The copy option is activated by default !!!
auto mode: each time it meets a dataset with reduced darks / flats, it keeps them in cache; and when it meets a dataset with darks / flats missing, it copies the cached ones to it.
manual mode: the user can provide a URL to well-formed reduced darks and reduced flats HDF5 datasets.
note: the darks and flats cache file is displayed at the bottom of the widget. This can be a good way to check that the registration goes as expected.
You can also create them using the tomoscan Python API:
import numpy
from tomwer.core.scan.hdf5scan import HDF5TomoScan
# from tomwer.core.scan.edfscan import EDFTomoScan
# same API for EDF or HDF5
scan = HDF5TomoScan(file_path, data_path)
darks = {
0: numpy.array(...), # darks at start
}
flats = {
0: numpy.array(...), # flats at start
3000: numpy.array(...), # flats at end
}
scan.save_reduced_darks(darks)
scan.save_reduced_flats(flats)
import os
import numpy
from silx.io.dictdump import dicttoh5
from silx.io.url import DataUrl
from tomoscan.esrf.scan.utils import copy_darks_to, copy_flats_to
# create darks and flats as numpy array
darks = {
0: numpy.ones((100, 100), dtype=numpy.float32),
}
flats = {
1: numpy.ones((100, 100), dtype=numpy.float32) * 2.0,
100: numpy.ones((100, 100), dtype=numpy.float32) * 2.0,
}
original_dark_flat_file = os.path.join(tmp_path, "originals.hdf5")
dicttoh5(darks, h5file=original_dark_flat_file, h5path="darks", mode="a")
dicttoh5(flats, h5file=original_dark_flat_file, h5path="flats", mode="a")
# create darks and flats URL
darks_url = DataUrl(
file_path=original_dark_flat_file,
data_path="/darks",
scheme="silx",
)
flats_url = DataUrl(
file_path=original_dark_flat_file,
data_path="/flats",
scheme="silx",
)
# apply the copy
scan = HDF5TomoScan(...)
copy_darks_to(scan=scan, darks_url=darks_url, save=True)
copy_flats_to(scan=scan, flats_url=flats_url, save=True)
It can sometimes happen that you have a Bliss dataset containing projections that must be used as flats in order to compute the reduced flats (dataset 1), and that these reduced flats must then be used by other datasets (datasets 2*).
In this case you can do the following actions:
- mark the projections of dataset 1 as flats with the image-key-editor or image-key-upgrader widget
- compute the reduced flats of dataset 1 (with the reduced dark and flat widget)
- copy these reduced flats to datasets 2* (provided to the reduced flats input as shown in the video) with the reduced dark and flat widget
- process datasets 2* as usual, since the reduced flats have been set during pre-processing (default center of rotation, nabu slice and data viewer on the video)

from IPython.display import YouTubeVideo
YouTubeVideo("vJOo0rHHUYk", height=500, width=800)
Note: if the processing is done before the flat copy has completed, or if the copy fails, then the flat field correction will fail and you might encounter the following error:
2023-03-06 16:04:45,967 [ERROR] cannot make flat field correction, flat not found [tomwer.core.scan.scanbase](scanbase.py:177)
2023-03-06 16:04:45,967:ERROR:tomwer.core.scan.scanbase: cannot make flat field correction, flat not found
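To guard against this, you can verify that the reduced files are present on disk before launching the processing. A minimal standard-library sketch (the helper name is ours; the file naming follows the {dataset_prefix}_darks.hdf5 / {dataset_prefix}_flats.hdf5 convention mentioned earlier):

```python
import os

def reduced_files_exist(dataset_prefix, folder):
    """Return True when both the reduced darks and reduced flats files exist."""
    darks = os.path.join(folder, dataset_prefix + "_darks.hdf5")
    flats = os.path.join(folder, dataset_prefix + "_flats.hdf5")
    return os.path.isfile(darks) and os.path.isfile(flats)
```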
The /scisoft/tomo_training/part3_flatfield/WGN_01_0000_P_110_8128_D_129/
folder contains three NXtomos.
Use the projections of the first one (WGN_01_0000_P_110_8128_D_129_0000.nx) as flats to compute the reduced flats.
Then provide these reduced flats to reconstruct one of the two other datasets (WGN_01_0000_P_110_8128_D_129_0001.nx
or WGN_01_0000_P_110_8128_D_129_0002.nx).
Note: this dataset is provided as a proof of concept. Please don't pay too much attention to the quality of the slice reconstruction.