key: cord-0058343-knrjk0v9
authors: Fourcade, Constance; Rubeaux, Mathieu; Mateus, Diana
title: Using Elastix to Register Inhale/Exhale Intrasubject Thorax CT: An Unsupervised Baseline to the Task 2 of the Learn2Reg Challenge
date: 2021-02-23
journal: Segmentation, Classification, and Registration of Multi-modality Medical Imaging Data
DOI: 10.1007/978-3-030-71827-5_13
sha: 244be84ab4291aa7b44f945c5550531ac97dcecc
doc_id: 58343
cord_uid: knrjk0v9

As part of MICCAI 2020, the Learn2Reg registration challenge was proposed as a benchmark for comparing registration algorithms. Task 2 of this challenge consists of intrasubject 3D HRCT inhale/exhale thorax image registration. In this context, we propose a classical iterative registration approach based on the Elastix toolbox, optimizing a normalized cross-correlation metric regularized by a bending energy penalty term. This conventional registration approach, as opposed to novel deep learning techniques, reached visually convincing results, with a target registration error of 6.55 ± 2.69 mm and a log-Jacobian standard deviation of 0.07 ± 0.03. The code is publicly available at: https://github.com/fconstance/Learn2Reg_Task2_SimpleElastix.

The Learn2Reg MICCAI 2020 satellite event is a registration challenge [3] consisting of four tasks that cover a wide range of medical image registration topics: multi-modality, noisy annotations, small datasets and large deformations. We concentrated on task 2, which consists of intrasubject inhale/exhale lung CT scan registration. Many deep learning-based registration methods have been developed recently [1, 2, 9] to reduce computational time and obtain more accurate deformations. However, according to [4], conventional registration methods still reach better accuracy on inhale/exhale lung CT [8] than learning-based ones. Hence, we present results obtained using an intensity-based iterative registration method built with the Elastix toolbox [6].
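As a concrete starting point, the setup described in this paper (B-spline transform, normalized cross-correlation regularized by a bending energy penalty, four resolutions, 1000 iterations per resolution, adaptive stochastic gradient descent, λ = 1) could be expressed as an elastix parameter file along the following lines. This is only a sketch, not the authors' released configuration (which is available in the linked repository); the component names are standard elastix ones, and any parameter not stated in the paper is an assumption:

```
// Sketch of an elastix parameter file matching the described setup;
// parameters not stated in the paper are assumptions.
(Registration "MultiMetricMultiResolutionRegistration")
(Transform "BSplineTransform")
(Metric "AdvancedNormalizedCorrelation" "TransformBendingEnergyPenalty")
(Metric0Weight 1.0)   // similarity term
(Metric1Weight 1.0)   // lambda = 1 for the bending energy penalty
(Optimizer "AdaptiveStochasticGradientDescent")
(NumberOfResolutions 4)
(MaximumNumberOfIterations 1000)
(Interpolator "BSplineInterpolator")
(ResampleInterpolator "FinalBSplineInterpolator")
(ImageSampler "RandomCoordinate")
```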
Since the available training data volume is relatively small, we believe a method that does not require prior training would be effective. In this way, we also provide the community with a well-optimized baseline for comparison. The paper is organized as follows: in the next section we present the approach we developed, as well as the dataset and the evaluation metrics provided by the challenge organizers; we then expose the results, before discussing them in the concluding section.

The dataset of task 2 of the Learn2Reg challenge consists of 60 monocentric thorax CT images from 30 subjects [5]. A pair of images (inhale and exhale scans) is available per subject (see Fig. 1), along with a segmentation mask of the lungs for each volume. Image size is 192 × 192 × 208 voxels and spacing is 1.75 × 1.25 × 1.75 mm. Although the dataset was split by the challenge organizers into 17 training, 3 validation and 10 testing pairs, we treat every image pair independently, since we do not use training. The objective of the second task is to register the inhale (moving) image to the exhale (fixed) one.

Preprocessing. Images provided by the challenge organizers were already resampled to the same spatial dimensions and voxel resolution, and affinely pre-registered. As visible in Fig. 1, the field-of-view (FOV) of the exhale images is reduced compared to that of the inhale images. To reduce unrealistic registration deformations, we decided to align the FOVs of the fixed and moving images: for each exhale image slice where the body of the subject is not visible, we set the values of the corresponding inhale slice to 0. Since we modify voxel values only for null slices, some small FOV misalignment can persist at the image borders (see 3rd column, Fig. 2).

Registration. Our iterative registration method uses a B-spline transformation with four resolutions, each halving the previous image size. Each resolution was optimized by minimizing Eq. 1 for 1000 iterations, using an adaptive stochastic gradient descent optimizer [6]:

C(T) = C_similarity(T; I_fixed, I_moving) + λ C_smooth(T)    (1)

These hyperparameter choices did not need special tuning and are quite common: after 1000 iterations the optimizer has converged, and with four resolutions the main image features are still visible at the coarsest level. Since the fixed and moving images, I_fixed and I_moving, are from the same modality, C_similarity corresponds to the normalized cross-correlation metric. Moreover, to ensure smooth and realistic-looking deformations, the similarity metric is regularized by C_smooth, a bending energy penalty term [7] corresponding to the second derivative of the transformation T. In our experiments, several values of λ were tested (0, 0.1, 1, 10). We chose λ = 1 to balance registration accuracy and smoothness.

Two evaluation metrics were used for the challenge. The target registration error (TRE) evaluates registration precision, while the standard deviation of the logarithm of the Jacobian of the deformation field (SDLogJ) quantifies registration smoothness. The TRE is computed on 100 landmarks automatically set on the fixed images, with correspondences manually annotated in the moving images. The landmarks lie on salient lung features, such as vessels or nodules. For both metrics, a lower value indicates a better registration.

With the proposed Elastix method, we obtain a TRE of 6.55 ± 2.69 mm and an SDLogJ of 0.07 ± 0.03 on the test dataset. Subject-wise details are visible in Table 1. For reference, the initial error between fixed and moving images (after affine registration only) was 10.24 ± 2.72 mm. Table 2 shows these results, along with those of the participating deep learning-based methods. Our low SDLogJ value confirms that the proposed method creates smooth deformations and realistic-looking images. However, the high TRE value reflects a lack of precise registration inside the lungs, as illustrated in the 6th row of Fig. 2.
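To make the two evaluation metrics concrete, a minimal pure-Python sketch follows. The function names (`tre`, `sd_log_jacobian`) are hypothetical, landmark coordinates are assumed to be given in voxel units and converted with the dataset spacing, and the challenge's official evaluation script may differ in detail (e.g., sample vs. population standard deviation):

```python
import math
import statistics

def tre(fixed_pts, moving_pts, spacing=(1.75, 1.25, 1.75)):
    """Mean target registration error in mm over paired landmarks.

    Landmarks are given in voxel coordinates and scaled by the
    dataset's voxel spacing before measuring Euclidean distances."""
    dists = [
        math.dist([c * s for c, s in zip(p, spacing)],
                  [c * s for c, s in zip(q, spacing)])
        for p, q in zip(fixed_pts, moving_pts)
    ]
    return sum(dists) / len(dists)

def sd_log_jacobian(jacobian_determinants):
    """SDLogJ: standard deviation of the log of the Jacobian
    determinants of the deformation field (lower = smoother)."""
    return statistics.pstdev(math.log(j) for j in jacobian_determinants)
```

For an identity deformation every Jacobian determinant is 1, so SDLogJ is 0; folding (non-positive determinants) makes the logarithm undefined, which is one reason smooth, invertible deformations are desirable.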
In this paper, we propose a conventional registration method for task 2 of the Learn2Reg challenge, based on a normalized cross-correlation metric and a bending energy regularization term. The fixed and moving images of the task 2 dataset did not present the same FOV; hence, we cropped image slices to obtain more realistic registration results. Without this prior FOV alignment, our method performs poorly, as illustrated on the image borders by the red boxes in the last row of Fig. 2. Even if our approach must be adapted from one dataset to another, it seems important to align the FOVs of the fixed and moving images to reach better registration results. Compared with the learning-based methods, our computation time is longer, yet we do not require time-expensive prior training. Also, our results reach similar smoothness, but slightly lower accuracy over the testing dataset. Regarding the use of hyperparameters, both conventional and learning-based methods need to find appropriate values for the number of resolutions/network depth, the number of iterations/epochs, the regularization weight, etc. Overall, our approach provides reasonable registration results, although inside the lungs the TRE remains high. This could be improved in a second step, by masking the body and focusing the registration only on the lungs. We provide all the implementation details, hyperparameters, and code, such that the proposed method can serve as a non-learning-based baseline for comparison.

Fig. 2. For three validation subjects (one per column): exhale image, inhale image, overlay of exhale and inhale images, overlay of exhale and inhale images with the deformation field from inhale to exhale in green, and overlay of exhale and registered inhale images (1st to 5th rows, respectively). The 6th row zooms inside the white squares of the 5th-row images. In this last row, body parts visually seem accurately registered (blue circles, gray-scale colors), but registration approximations are visible within the inner lung regions (orange circles, pink-green colors). Larger registration errors due to misaligned initial fields of view are highlighted in the bottom-right image by red boxes. Best viewed in color. (Color figure online)

References
[1] VoxelMorph: a learning framework for deformable medical image registration
[2] Introduction to medical image registration with DeepReg, between old and new
[3] Learn2Reg Challenge
[4] Highly accurate and memory efficient unsupervised learning-based discrete CT registration using 2.5D displacement search
[5] Learn2Reg Challenge: CT Lung Registration - Training Data
[6] elastix: a toolbox for intensity-based medical image registration
[7] Nonrigid registration using free-form deformations: application to breast MR images
[8] Estimation of large motion in lung CT by integrating regularized keypoint correspondences into dense deformable registration
[9] A deep learning framework for unsupervised affine and deformable image registration