key: cord-0030242-ce2a6qfn authors: Amin, Javaria; Anjum, Muhammad Almas; Sharif, Muhammad; Kadry, Seifedine; Nadeem, Ahmed; Ahmad, Sheikh F. title: Liver Tumor Localization Based on YOLOv3 and 3D-Semantic Segmentation Using Deep Neural Networks date: 2022-03-27 journal: Diagnostics (Basel) DOI: 10.3390/diagnostics12040823 sha: 49caa091eb2d619fb40d0f71bab46e5bef6b3941 doc_id: 30242 cord_uid: ce2a6qfn Worldwide, more than 1.5 million deaths occur due to liver cancer every year. The use of computed tomography (CT) for early detection of liver cancer could save millions of lives per year. There is also an urgent need for a computerized method to interpret, detect, and analyze CT scans reliably, easily, and correctly. However, precise segmentation of minute tumors is a difficult task because of variation in the shape, intensity, and size of the tumor, its low contrast, and the adjacent tissues of the liver. To address these concerns, a model comprising three parts is proposed: synthetic image generation, localization, and segmentation. An optimized generative adversarial network (GAN) is utilized for the generation of synthetic images. The generated images are localized by an improved localization model, in which deep features are extracted from a pre-trained ResNet-50 model and fed into a YOLOv3 detector as input. The proposed modified model localizes and classifies the minute liver tumor with 0.99 mean average precision (mAP). The third part is segmentation, in which a pre-trained Inceptionresnetv2 is employed as the base network of DeepLabv3 and is subsequently trained with fine-tuned parameters on annotated ground-truth masks. The experiments show that the proposed approach achieved greater than 95% accuracy in the testing phase and, in comparison to recently published work in this domain, localized and segmented the liver and minute liver tumors more accurately.
The liver, the main organ situated behind the right ribs and beneath the base of the lung, helps in food digestion [1]. It is responsible for filtering the blood, nutrient recovery, and storage [2]. The two major areas of the liver are the right and left lobes; the caudate and quadrate are two further lobes. In hepatocellular carcinoma (HCC), liver cells grow rapidly and may spread to other areas of the body [3]. Hepatic primary malignancies arise when the cells behave irregularly [4]. In 2008, 750,000 liver cancer patients were diagnosed, 696,000 of whom died because of it [5]. In 2021, 42,230 cases of liver tumor/cancer were diagnosed (12,340 women and 29,890 men), 30,230 of whom died (9930 female and 20,300 male) [6]. Globally, the prevalence among males is approximately double that among females [7,8]. Medical, imaging [9,10], and laboratory studies, such as MRI and CT scans, detect primary liver malignancy [11]. To obtain accurate images from different angles, such as axial, coronal, and sagittal slices, a CT scan uses radiation [12]. Hepatic malignancy staging relies on the size and position of the malignancy. It is therefore necessary to establish an automated technique to accurately diagnose and identify the cancerous area in a CT scan [13-16]. In general, liver CT scans are interpreted through manual or semi-manual procedures; however, these methods are subjective, costly, time-consuming, and extremely vulnerable to error [17]. Many computer-aided approaches have been developed to resolve these challenges and increase the efficiency of liver tumor diagnosis [52].
However, due to several problems, such as poor contrast between the liver and tumors, varying contrast ratios of tumors, differences in the number and size of tumors, tissue anomalies, and unusual tumor development in response to medical care, existing methods are not especially accurate in separating the liver and lesions [15,53]. Currently, fully convolutional networks (FCNs) [54-57] have received great attention for liver segmentation compared to traditional approaches [58], i.e., shape-based statistical methods [59,60], random forest, AdaBoost [61], and graph cut [62]. Christ et al. utilized a cascaded FCN model for segmentation [63]. The deep cascaded model has been widely utilized to segment liver tumors. However, owing to its local receptive field and shallow depth, an FCN loses some spatial image information [64]. Thus, building on the FCN, the U-net fuses features through the addition of four skip connections and up-samples to the size of the input using deconvolutional layers [65]. The U-net model achieved maximum segmentation accuracy by increasing the depth of the network and its receptive fields. The features of the DenseNet and U-net models have been combined to explore more informative features; this combination of feature information also reduced the computational cost [66]. Jin et al. presented an attention residual model to extract more useful input features [67]. Ginneken et al. employed an encoder-decoder model for precise liver tumor segmentation [68]. H. R. Roth et al. suggested a 3D-FCN model to extract detailed abdominal vessel and organ information [69]. Automated segmentation of liver tumors is more challenging owing to the fuzzy borders between healthy and tumor tissues [19]. To handle these challenging tasks, extensive segmentation models have been developed.
A two-stage model has been presented for liver tumor segmentation, including shape-based and auto-context learning [70]. However, these methods achieved minimal accuracy at a computationally high cost [71,72]. An LSM model has been utilized with an energy edge that performed better compared to conventional Chan-Vese and geodesic models. A fuzzy pixel-based classification approach has been utilized for segmentation [73]. Existing work does not accurately segment minute liver tumors due to limited training data; researchers still need large-scale datasets for precise detection of liver lesions [74]. Hence, in the proposed research, these limitations and challenges are addressed through an enhanced GAN model that produces synthetic images to enlarge the training set, which helps in the segmentation of very minute liver tumors. For more accurate localization of the small affected region, a modified YOLOv3-ResNet-50 model is proposed. Inceptionresnetv2 is used as the base network of DeepLabv3 for more precise segmentation and localization. The major steps of the proposed approach are: synthetic liver CT images are created with a modified GAN model and fed into the localization part of the model; after synthetic image generation, the YOLOv3-ResNet-50 model is designed for liver and liver tumor localization; in the last step, a modified 3D-semantic segmentation model is presented, where Inceptionresnetv2 serves as the base network of DeepLabv3. The remaining article is structured as follows: the introduction is given in Section 1, Section 2 explains the suggested model, and Section 3 discusses the findings and conclusions. In Phase I of the proposed research, synthetic CT images are created using an improved GAN [75] model, and they are passed to an improved pre-trained ResNet-50 [76] model as the base network of YOLOv3 [77] in Phase II.
The proposed localization model is fine-tuned with selected learning parameters to accurately localize the small liver and liver tumor. In Phase III, a convolutional neural network is used to conduct 3D segmentation, with the pre-trained Inceptionresnetv2 model serving as the base network of DeepLabv3 [78]. The proposed model is illustrated in Figure 1. The GAN is a deep neural network that generates data approximately like the input slices. The model contains two networks: a generator and a discriminator. The generator network takes a random vector and generates an image similar to the training images, whereas the discriminator takes batches of images containing observations from the training data and the generated data and classifies each input image as real or synthetic. In the proposed modified GAN [79], the generator network contains 13 layers, including 01 input, 01 project-and-reshape, 04 transposed convolution, 03 batch-norm, 03 ReLU, and Tanh; the discriminator network comprises 13 layers, including 01 input, 05 convolution, 04 LeakyReLU, and 03 batch-normalization, as presented in Figure 2.
The loss function of the proposed GAN model is described as follows:
- Given the discriminator output ρ, ρ̂ = σ(ρ) is the probability that an input slice is real, where σ denotes the sigmoid function; 1 − ρ̂ is the probability that the input slice is synthetic.
- Generator loss = −mean(log ρ̂_generated), where ρ̂_generated denotes the discriminator output probability for synthetically generated images.
- The discriminator is trained to increase the probability of accurately classifying the real input slices and the synthetically generated slices.
- Discriminator loss = −mean(log ρ̂_real) − mean(log(1 − ρ̂_generated)), where ρ̂_real denotes the discriminator output probability for real input slices.
- The generator score is the average of the discriminator output probabilities for synthetically generated images: generator score = mean(ρ̂_generated).
- The discriminator score is the average of the discriminator output probabilities for real and synthetic images: discriminator score = (1/2) mean(ρ̂_real) + (1/2) mean(1 − ρ̂_generated).
The hyperparameters of the GAN model are depicted in Table 1. Table 1 shows the GAN parameters: the size of the input images is 64 × 64 × 3, and the discriminator returns a scalar prediction score based on a series of convolutional, batch-normalization, and leaky-ReLU layers. A dropout probability of 0.5 is selected in the dropout layer, and a 5 × 5 filter size is used in the convolutional layers. The stride is 2 and the leaky-ReLU scale is 0.2. The mini-batch size is 128 for 3000 epochs. A learning rate of 0.0002, a gradient decay factor of 0.5, and a squared gradient decay factor of 0.999 are selected for GAN model training. A flip factor of 0.5 is set because the generator may fail in training if the discriminator learns to distinguish between actual and generated CT images too soon; randomly flipping the labels of a fraction of the genuine images helps balance the learning of the discriminator and the generator. For better detection of smaller objects, YOLOv3 [50] extends YOLOv2 by incorporating detection at multiple scales. Therefore, an improved YOLOv3-RES is suggested for tumor localization. The model comprises a two-stage learning pipeline in which the features extracted by ResNet-50 are transferred to YOLOv3.
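The GAN losses and scores defined earlier in this section can be sketched in NumPy. This is a hedged illustration, not the authors' MATLAB implementation; the logits below are placeholder values, not real model output.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Discriminator logits for a toy batch (placeholder values)
rho_real = sigmoid(np.array([2.1, 1.7, 0.9]))        # D's probability that real slices are real
rho_generated = sigmoid(np.array([-1.2, -0.4, 0.3])) # D's probability that synthetic slices are real

# Generator loss = -mean(log rho_hat_generated): G wants D to call fakes real
g_loss = -np.mean(np.log(rho_generated))

# Discriminator loss = -mean(log rho_hat_real) - mean(log(1 - rho_hat_generated))
d_loss = -np.mean(np.log(rho_real)) - np.mean(np.log(1.0 - rho_generated))

# Scores reported during training (as in Table 5)
generator_score = np.mean(rho_generated)
discriminator_score = 0.5 * np.mean(rho_real) + 0.5 * np.mean(1.0 - rho_generated)

# Label flipping (flip factor 0.5 from Table 1): flip the labels of a random
# fraction of the real images so D does not overpower G early in training
rng = np.random.default_rng(0)
labels_real = np.ones(3)
flip_mask = rng.random(3) < 0.5
labels_real[flip_mask] = 0.0
```

A discriminator score near 0.5 indicates a balanced adversarial game; values near 1.0 mean the discriminator is winning, which is exactly the regime the label flipping is meant to avoid.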
The model comprises 177 layers, i.e., one input image layer of size 224 × 224 × 3, 29 convolutional (CONv), 27 ReLU, 3 max-pooling, 08 depth-concatenation, and 01 up-sample layers. The proposed YOLOv3-RES model contains 181 layers, comprising 57 CONv, 53 batch-norm, 51 ReLU, 01 max-pooling, 16 addition, up-sample, and depth-concatenation layers, as depicted in Figure 3.
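As a quick sanity check on the detection heads, the size of a YOLO output tensor follows from the grid size, the anchor count per head, and the number of classes (per-cell channels = anchors × (5 + classes)). The class count of 2 (liver, liver tumor) and the per-head anchor split below are assumptions for illustration, not figures stated by the paper.

```python
def yolo_head_shape(grid, num_anchors, num_classes):
    """Grid cells, each predicting (tx, ty, tw, th, objectness) plus class scores per anchor."""
    channels = num_anchors * (5 + num_classes)
    return (grid, grid, channels)

# A 224x224 input downsampled by 8 and 32 gives 28x28 and 7x7 grids.
# Splitting the paper's 7 anchors as 4 + 3 across the two heads is assumed.
print(yolo_head_shape(28, 4, 2))  # (28, 28, 28)
print(yolo_head_shape(7, 3, 2))   # (7, 7, 21)
```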
In this step, the generated images obtained from the GAN model, as well as the original CT images, are passed to the proposed localization model, where ResNet-50 is utilized for feature extraction and two detection heads are added at the end of the network. The second detection head, on the activation-37 ReLU features (7 × 7 × 2048), has twice the channel depth of the first detection head on the activation-49 ReLU features (28 × 28 × 1048), so the model performs better for the localization of small objects. The anchor number is set to 7 in order to obtain a better tradeoff between the number of anchors and the IoU. In the modified YOLOv3-RES network, features are extracted from the ReLU-49 activation layer; the subsequent layers, such as global average pooling, fully connected, softmax, and classification output, are removed, and eight layers are added, including convolution, ReLU, Conv2Detection1, upsampling, depth concatenation, Conv1Detection2, ReLU, and Conv2Detection2. The proposed model localizes the coordinates of the liver and liver tumor regions in the CT images more precisely. The selection of the optimum learning rate is a challenging task because it directly affects model performance; therefore, an experiment is performed to select the optimum learning rate, as presented in Table 2.
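The anchor-count vs. IoU tradeoff mentioned above is commonly explored by clustering the labeled box sizes with k-means under a 1 − IoU distance (the standard YOLO recipe; the paper does not state its exact procedure, so this is a sketch under that assumption, and the box sizes below are synthetic, not dataset values).

```python
import numpy as np

def wh_iou(wh, anchors):
    """IoU between (width, height) boxes and anchors, both centered at the origin."""
    inter = np.minimum(wh[:, None, 0], anchors[None, :, 0]) * \
            np.minimum(wh[:, None, 1], anchors[None, :, 1])
    union = wh[:, 0] * wh[:, 1]
    union = union[:, None] + anchors[:, 0] * anchors[:, 1] - inter
    return inter / union

def kmeans_anchors(wh, k, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    anchors = wh[rng.choice(len(wh), k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each box to the anchor it overlaps most (distance = 1 - IoU)
        assign = np.argmax(wh_iou(wh, anchors), axis=1)
        for j in range(k):
            if np.any(assign == j):
                anchors[j] = wh[assign == j].mean(axis=0)
    mean_best_iou = np.mean(np.max(wh_iou(wh, anchors), axis=1))
    return anchors, mean_best_iou

# Synthetic (width, height) box sizes in pixels for illustration
rng = np.random.default_rng(1)
wh = rng.uniform(5, 120, size=(200, 2))
anchors, mean_best_iou = kmeans_anchors(wh, k=7)
```

Sweeping k and plotting the mean best IoU gives the elbow from which a value such as k = 7 would be chosen.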
Table 2 depicts the learning rate values with the corresponding error rates: an error rate of 0.2354 at a learning rate of 0.0001, 0.1354 at 0.001, 0.1989 at 0.002, and 0.2014 at 0.0005. After the experimentation, we observed that a learning rate of 0.001 provides a lower error rate than the other values; hence, a learning rate of 0.001 is used for further experimentation. Table 3 states the training parameters. As shown in Table 3, a batch size of eight is selected to stabilize the training (it also depends on the available memory). A learning rate of 0.001 is selected with 1000 warmup iterations, i.e., the number of iterations over which the learning rate is increased according to the expression in Equation (1); this helps stabilize the gradients. The L2 regularization factor is set to 0.0005, and the penalty threshold is 0.5, i.e., detections that overlap the ground mask by less than 0.5 are penalized. Semantic segmentation is proposed for liver cancer segmentation, where Inceptionresnetv2 is utilized as the base model of DeepLabv3 [45]. The Inceptionresnetv2 model contains 824 layers, including 241 CONv, 245 ReLU, 199 batch-normalization, 3 pooling, 41 depth-concatenation, 1 global average pooling, 39 scaling, and addition layers. The proposed semantic segmentation model consists of 853 layers, including 253 CONv, 208 batch-norm, 251 ReLU, 03 max-pooling, 01 average pooling, 44 depth-concatenation, 02 transposed CONv, 02 crop-2D, 49 scaling, 38 addition, pixel classification, and softmax layers. The best-fit parameters from Table 4 are used for training. Figure 4 depicts the proposed semantic segmentation model.
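Equation (1) is not legible in the extracted text; a common warmup rule consistent with the description (the learning rate ramped from near zero to the base rate of 0.001 over the first 1000 iterations, here with a polynomial ramp as an assumption) looks like this:

```python
def warmup_lr(iteration, base_lr=0.001, warmup_iters=1000, power=4):
    """Ramp the learning rate from ~0 up to base_lr over the warmup window,
    then hold it constant; the polynomial exponent is an assumed choice."""
    if iteration < warmup_iters:
        return base_lr * (iteration / warmup_iters) ** power
    return base_lr
```

Early iterations thus take very small steps (e.g., warmup_lr(500) is base_lr × 0.5⁴), which damps the large, noisy gradients of an untrained detector.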
The model is trained with the SGDM optimizer with a batch size of 8, and the proposed model converges within 100 epochs. These parameters provide a significant improvement in liver tumor segmentation. The results of the proposed segmentation model are shown with the annotated ground masks in Figure 5. The performance of the proposed method is simulated on the publicly available 3D-IRCADb-01 dataset.
The 3D-IRCADb-01 data consist of CT scans of 10 female and 10 male patients, with hepatic tumors in approximately 75% of the cases. The dataset includes 20 folders relating to 20 different patients. These folders, named 3D-IRCADb-01-number, each contain four subfolders: DICOM-patients, DICOM-labeled, DICOM-masks, and VTK-meshes [80]. In this research, 1353 training slices with 1353 masks and 1353 testing slices with 1353 binary masks of the liver and liver tumor are utilized for 3D segmentation. This research work is evaluated by performing three different experiments: Experiment 1 assesses the performance of the improved GAN approach; the second experiment computes the performance of the localization method; and the third experiment computes the performance of the segmentation model. The proposed research work is implemented on a G5500 gaming laptop with an RTX 2070 8 GB graphics card, a Windows 10 operating system, a 32 TB SSD, and MATLAB R2020b. Experiment 1 simulates the performance of the GAN model, in which synthetic CT images are generated with the generator model and subsequently classified with the discriminator model. The training performance of the improved GAN model is graphically depicted in Figure 6. Table 5 illustrates the generator's and discriminator's prediction scores. The GAN model attains a discriminator score of 0.8092 for distinguishing between original and generated data. The generator score of 0.1354 denotes the average probability for the generated images.
The results show that the GAN model produces synthetic images that resemble the originals; the synthetic images generated by the GAN are depicted in Figure 7. The synthetic images are transferred to the enhanced localization model. The training efficiency of the proposed model in terms of total loss and learning rate over the iterations is shown graphically in Figure 8. Figures 9 and 10 depict the localization outcomes of the suggested method. Table 6 lists the obtained localization findings. The localization results demonstrate that the proposed method localizes even very minute liver tumors accurately. Table 6 shows that 0.97 mAP and 0.98 IoU were achieved on the liver, and 0.96 mAP and 0.97 IoU were achieved on the liver tumor, on the benchmark liver CT dataset. In this experiment, liver and liver tumor regions are segmented using an improved 3D-semantic segmentation model.
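The IoU values reported in Table 6 above compare predicted boxes with ground-truth boxes; a minimal IoU routine for axis-aligned [x, y, width, height] boxes is shown below with toy coordinates (illustrative values, not results from the paper).

```python
def box_iou(a, b):
    """Intersection over union of two boxes given as [x_min, y_min, width, height]."""
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

pred = [48, 60, 100, 80]   # predicted tumor box (toy values)
gt   = [50, 62, 100, 80]   # ground-truth box (toy values)
iou = box_iou(pred, gt)    # near-perfect overlap
matched = iou >= 0.5       # counts as a true positive at the usual mAP threshold
```

mAP then averages, over classes, the area under the precision-recall curve built from such matched/unmatched detections ranked by confidence.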
At the pixel level, the model is trained using ground-truth labelling. Figure 11 depicts the segmentation performance as a confusion matrix. As seen in Figures 12 and 13, the proposed model segments the actual liver tumor lesions more correctly. Table 7 shows the 3D segmentation of the liver and hepatic tumor regions. For the segmented liver region, the proposed method achieved 0.981 global accuracy, 0.972 mean accuracy, 0.99 IoU, 0.984 F1-score, 0.99 precision, 0.98 recall, and 0.98 specificity. On the segmented liver tumor region, the method achieved 0.991, 0.992, 0.99, 1.00, 0.98, 1.00, and 0.995 for global accuracy, mean accuracy, IoU, precision, recall, specificity, and F1-score, respectively. A comparison of the proposed method's results with the work published so far is shown in Table 8.
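All of the Table 7 metrics derive from the confusion matrix of Figure 11. For a binary tumor/background case with assumed toy counts (not the paper's actual pixel counts), the definitions are:

```python
# Toy binary confusion matrix: tumor = positive class (counts are illustrative)
tp, fp, fn, tn = 950, 10, 20, 9020

global_acc = (tp + tn) / (tp + tn + fp + fn)   # fraction of all pixels labeled correctly
precision = tp / (tp + fp)                     # predicted-tumor pixels that are tumor
recall = tp / (tp + fn)                        # tumor pixels recovered (sensitivity)
specificity = tn / (tn + fp)                   # background pixels recovered
iou = tp / (tp + fp + fn)                      # Jaccard index for the tumor class
f1 = 2 * precision * recall / (precision + recall)  # equals the Dice score
mean_acc = 0.5 * (recall + specificity)        # mean of the per-class accuracies
```

Note that a large true-negative count inflates global accuracy even when the tumor class is segmented poorly, which is why IoU and F1/Dice are the more informative numbers for small lesions.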
Table 8 shows the results of existing methods. The ResNet-50 model used for liver and liver tumor segmentation on the 3D-IRCADb dataset achieved scores of 0.96 for liver identification and 0.82 for liver tumor identification [81], whereas an encoder-decoder-based semantic segmentation model employed for liver segmentation achieved a score of 0.95 but only 64.3% ± 34.6% on hepatic tumors [82]. Similarly, a residual U-network employed for liver and liver tumor segmentation achieved scores of 0.96 and 0.83, respectively [67]. The U-net model has been utilized for liver analysis, with a score of 0.56 [83]. Adaptive region growing has been utilized to segment the liver tumor, providing a score of 0.85 [85]. Geometrical, shape, and texture features have been utilized for liver tumor segmentation, resulting in a score of 0.87 [86]. A deep attention model provides a segmentation score of 0.85 on the tumor region [84]. A multiscale residual dilated U-network (MRDU) utilized for liver and liver tumor segmentation provides Dice scores of 96.0 and 76.3, respectively [84]. A U-shaped network employed for liver tumor segmentation provides a Dice score of 0.84 [87]. From the existing work, we observe that no method provides improved results for both liver and liver tumor segmentation: in the existing methodologies, when the liver segmentation scores increase, the liver tumor segmentation scores decrease [84].
In the proposed research, Inceptionresnetv2 is utilized as the core model of DeepLabv3, and the proposed model provides a score of 0.99 for liver tumor segmentation and 0.98 for liver segmentation, which is higher than the existing methods. The comparison of results reflects that the proposed 3D localization and segmentation model provides significantly better performance than existing works on the same benchmark dataset. Segmentation is a tough operation due to the variable size and shape of liver tumors. As a result, a novel framework for liver tumor detection was developed in this research. The number of input slices is increased by the GAN model. The combination of ResNet-50 and the YOLOv3 detector model localizes the small liver tumor more precisely. The model achieves 0.97 mAP on the liver and 0.96 mAP on the localized liver tumor. After localization, a 3D-semantic segmentation approach is proposed for segmentation of the affected areas. The improved segmentation model segments the liver and liver tumor pixels more accurately. When compared to recently published work in this area, the segmented regions attained 0.99 global accuracy.
- In vivo mechanical characterization of human liver
- Emerging Risks: A Strategic Management Guide
- Total right hepatic lobectomy: Report of a case
- The Production of Malignant Primary Hepatic Tumours in the Rat by Feeding Dimethylnitrosamine
- Liver Tumor Segmentation in CT Scans Using Modified SegNet
- Key Statistics about Liver Cancer
- Global cancer statistics
- The global burden of viral hepatitis from 1990 to 2013: Findings from the Global Burden of Disease Study
- Gastric polyp detection in gastroscopic images using deep neural network
- A distinctive approach in brain tumor detection and classification using MRI
- Detection and characterization of focal liver lesions: A Japanese phase III, multicenter comparison between gadoxetic acid disodium-enhanced magnetic resonance imaging and contrast-enhanced computed tomography predominantly in patients with hepatocellular carcinoma and chronic liver disease
- Accuracy of cone-beam computed tomography imaging of the temporomandibular joint: Comparisons with panoramic radiology and linear tomography
- Position paper of the Italian Association for the Study of the Liver (AISF): The multidisciplinary clinical approach to hepatocellular carcinoma
- Computer-aided diagnosis in medical imaging: Historical review, current status and future potential
- Review of liver segmentation and computer assisted detection/diagnosis methods in computed tomography
- Review on the Methods of Automatic Liver Segmentation from Abdominal Images
- A Comparative Evaluation for Liver Segmentation from Spir Images and a Novel Level Set Method Using Signed Pressure Force Function
- A method for the detection and classification of diabetic retinopathy using structural predictors of bright lesions
- An improved framework for brain tumor analysis using MRI based on YOLOv2 and convolutional neural network
- Brain tumor detection using fusion of hand crafted and deep learning features
- Brain tumor detection using statistical and machine learning method
- Detection of Brain Tumor based on Features Fusion and Machine Learning
- A Review on Recent Developments for Detection of Diabetic Retinopathy
- Brain tumor classification based on DWT fusion of MRI sequences using convolutional neural network
- An integrated design of particle swarm optimization (PSO) with fusion of features for detection of brain tumor
- A New Approach for Brain Tumor Segmentation and Classification Based on Score Level Fusion Using Transfer Learning
- Brain tumor detection: A long short-term memory (LSTM)-based learning model
- Brain tumor classification: Feature fusion
- Use of machine intelligence to conduct analysis of human brain data for detection of abnormalities in its cognitive functions
- Integrated design of deep features fusion for localization and classification of skin cancer
- Brain Tumor Detection by Using Stacked Autoencoders in Deep Learning
- Brain tumor detection based on extreme learning
- Diabetic retinopathy detection and classification using hybrid feature set
- Convolutional neural network with batch normalization for glioma and stroke lesion detection using MRI
- A unified patch based method for brain tumor detection using features fusion
- Recognition of Different Types of Leukocytes Using YOLOv2 and Optimized Bag-of-Features
- Deep Semantic Segmentation and Multi-Class Skin Lesion Classification Based on Convolutional Neural Network
- Efficient hybrid approach to segment and classify exudates for DR prediction
- An Integrated Design for Classification and Localization of Diabetic Foot Ulcer Based on CNN and YOLOv2-DFU Models
- Segmentation and classification of lung cancer: A review
- Diagnosis of COVID-19 Infection Using Three-Dimensional Semantic Segmentation and Classification of Computed Tomography Images
- 3D-semantic segmentation and classification of stomach infections using uncertainty aware deep neural networks
- An intelligence design for detection and classification of COVID19 using fusion of classical and convolutional neural network and improved microscopic features selection approach
- 3D Semantic Deep Learning Networks for Leukemia Detection
- Convolutional Bi-LSTM Based Human Gait Recognition Using Video Sequences
- Microscopic segmentation and classification of COVID-19 infection with ensemble convolutional neural network
- A deep network designed for segmentation and classification of leukemia using fusion of the transfer learning models
- An integrated framework for COVID-19 classification based on classical and quantum transfer learning from a chest radiograph
- Fruits and Vegetable Diseases Recognition Using Convolutional Neural Networks
- De novo design of potent and resilient hACE2 decoys to neutralize SARS-CoV-2
- Computer-Aided Diagnosis Systems for Lung Cancer: Challenges and Methodologies
- Automatic liver segmentation in MRI images using an iterative watershed algorithm and artificial neural network
- Adaptive Attention Convolutional Neural Network for Liver Tumor Segmentation
- Deep learning with convolutional neural networks for identification of liver masses and hepatocellular carcinoma: A systematic review
- Liver tissue segmentation in multiphase CT scans using cascaded convolutional neural networks
- Robust engineering-based unified biomedical imaging framework for liver tumor segmentation
- Fully Convolutional Network for Liver Segmentation and Lesions Detection
- A conditional statistical shape model with integrated error estimation of the conditions: Application to liver segmentation in non-contrast CT images
- Survey on liver tumour resection planning system: Steps, techniques, and parameters
- Automatic liver segmentation using multiple prior knowledge models and free-form deformation
- Automatic Liver Segmentation on Volumetric CT Images Using Supervoxel-Based Graph Cuts
- Automatic liver and lesion segmentation in CT using cascaded fully convolutional neural networks and 3D conditional random fields
- Detection-aided liver lesion segmentation using deep learning
- Image-based textile decoding
- Hybrid densely connected UNet for liver and tumor segmentation from CT volumes
- A Hybrid Deep Attention-Aware Network to Extract Liver and Tumor in CT Scans
- Automatic liver tumor segmentation in CT with fully convolutional neural networks and object-based postprocessing
- An application of cascaded 3D fully convolutional networks for medical image segmentation
- Abdominal multi-organ segmentation with cascaded convolutional and adversarial deep networks
- Semi-Automated Segmentation of Single and Multiple Tumors in Liver CT Images Using Entropy-Based Fuzzy Region Growing
- A hybrid semi-automatic method for liver segmentation based on level-set methods using multiple seed points
- Integrating machine learning with region-based active contour models in medical image segmentation
- Liver tumor segmentation from computed tomography images using multiscale residual dilated encoder-decoder network
- Unsupervised representation learning with deep convolutional generative adversarial networks
- Deep residual learning for image recognition
- Encoder-decoder with atrous separable convolution for semantic image segmentation
- Quantum Machine Learning Architecture for COVID-19 Classification Based on Synthetic Data Generation Using Conditional Adversarial Neural Network
- IRCAD EITS: Institut de Recherche Contre les Cancers de L'appareil Digestif
- E2Net: An edge enhanced network for accurate liver and tumor segmentation on CT scans
- Cascaded deep convolutional encoder-decoder neural networks for efficient liver tumor segmentation
- Bottleneck feature supervised U-Net for pixel-wise liver and tumor segmentation
- A deep attention network via high-resolution representation for liver and liver tumor segmentation
- Semi-automatic liver tumor segmentation with adaptive region growing and graph cuts
- Optimized segmentation and classification for liver tumor segmentation and classification using opposition-based spotted hyena optimization
- SAA-Net: U-shaped network with Scale-Axis-Attention for liver tumor segmentation
The authors appreciate the support of the
Researchers Supporting Project number (RSP-2021/124), King Saud University, Riyadh, Saudi Arabia. The authors declare that they have no conflict of interest.