Title: COVID-19 CT Image Synthesis with a Conditional Generative Adversarial Network
Authors: Jiang, Yifan; Chen, Han; Loew, Murray; Ko, Hanseok
Date: 2020-07-29

Coronavirus disease 2019 (COVID-19) is an ongoing global pandemic that has spread rapidly since December 2019. Real-time reverse transcription polymerase chain reaction (rRT-PCR) and chest computed tomography (CT) imaging both play an important role in COVID-19 diagnosis. Chest CT imaging offers the benefits of quick reporting, low cost, and high sensitivity for the detection of pulmonary infection. Recently, deep-learning-based computer vision methods have demonstrated great promise for use in medical imaging applications, including X-rays, magnetic resonance imaging, and CT imaging. However, training a deep-learning model requires large volumes of data, and medical staff face a high risk when collecting COVID-19 CT data due to the high infectivity of the disease. Another issue is the lack of experts available for data labeling. In order to meet the data requirements for COVID-19 CT imaging, we propose a CT image synthesis approach based on a conditional generative adversarial network that can effectively generate high-quality and realistic COVID-19 CT images for use in deep-learning-based medical imaging tasks. Experimental results show that the proposed method outperforms other state-of-the-art image synthesis methods on the generated COVID-19 CT images and shows promise for various machine learning applications, including semantic segmentation and classification.

Coronavirus disease 2019 (COVID-19) [1], which was first identified in Wuhan, China, in December 2019, was declared a pandemic in March 2020 by the World Health Organization (WHO). As of 21 July, there had been more than 14 million confirmed cases and 609,198 deaths across 188 countries and territories [2]. COVID-19 is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), and its most common symptoms include fever, dry cough, a loss of appetite, and fatigue, with common complications including pneumonia, liver injury, and septic shock [3], [4]. There are two main diagnostic approaches for COVID-19: rRT-PCR and chest computed tomography (CT) imaging [4]. In rRT-PCR, an RNA template is first converted by reverse transcriptase into complementary DNA (cDNA), which is then used as a template for exponential amplification using the polymerase chain reaction (PCR). However, the sensitivity of rRT-PCR is relatively low for COVID-19 testing [5], [6]. As an alternative, chest CT scans can be used to take tomographic images of the chest area at different angles with post-computed X-ray measurements. This approach has a higher sensitivity to COVID-19 and is less resource-intensive than traditional rRT-PCR [5], [6]. Over time, artificial intelligence (AI) has come to play an important role in medical imaging tasks, including CT imaging [7], [8], magnetic resonance imaging (MRI) [9], and X-ray imaging [10]. Deep learning is a particularly powerful AI approach that has been successfully employed in a wide range of medical imaging tasks due to the massive volumes of data that are now available. These large datasets allow deep-learning networks to be well-trained, extending their generalizability for use in various applications.
However, the collection of COVID-19 data for use in deep-learning models is far more difficult than normal data collection. Because COVID-19 is highly contagious [4], medical staff require full-length protection for CT scans, and the CT scanner and other equipment need to be carefully disinfected after an operation. In addition, certain tasks, such as CT image segmentation, require well-labeled data, which is labor-intensive to produce. These problems mean that the COVID-19 CT data collection process can be difficult and time-consuming. In order to speed up the COVID-19 CT data collection process for deep-learning-based CT imaging and to protect medical personnel from possible infection when coming into contact with COVID-19 patients, we propose a novel image synthesis method based on a conditional generative adversarial network (cGAN) for deep-learning chest CT imaging. The fundamental principle of the proposed method is to employ CT images and corresponding well-labeled semantic segmentation maps to train a cGAN model. Figure 1 presents example chest CT images from several COVID-19 patients. The proposed model takes lung segmentation maps as input and employs them to generate synthesized CT images during the training stage. In the testing stage, we prepare a data-augmented segmentation map and then use the pre-trained model to generate realistic synthesized lung CT images.

Fig. 1. Example chest CT images from COVID-19 patients. The first column shows CT images of the entire chest, the second column contains CT images of the lungs only, and the third column shows the corresponding segmentation map, with the lung region colored red, ground-glass opacity colored blue, and areas of consolidation colored green.

The main contributions of the proposed method are as follows: (1) A safe COVID-19 chest CT data collection method based on image synthesis is presented. It is designed to significantly reduce the infection risk and the workload of medical staff. To the best of our knowledge, the proposed method represents the first use of image synthesis technology for COVID-19 chest CT imaging. (3) The proposed method outperforms other state-of-the-art image synthesizers in several image-quality metrics and demonstrates its potential for use in image synthesis for computer vision tasks such as semantic segmentation for COVID-19 chest CT imaging. (4) Benefiting from its restoration performance and flexible segmentation mapping, the proposed approach holds significant promise for practical applications, such as the rapid diagnosis of COVID-19 or other diseases, using a simple process that starts with a data-augmented segmentation map of the target image in order to reconstruct medical simulation images.

Generative adversarial networks. Generative adversarial networks (GANs) were first reported in 2014 [11], and they have since been widely applied to many practical applications, including image synthesis [12], [13], [14], [15], image enhancement [16], [17], human pose estimation [18], [19], and video generation [20], [21]. A GAN structure generally consists of a generator and a discriminator, where the goal of the generator is to fool the discriminator by generating synthetic samples that cannot be distinguished from real samples. A common GAN extension is the conditional generative adversarial network (cGAN) [22], which generates images conditioned on class labels. cGANs generally produce more realistic results than traditional GANs due to the extra information provided by these conditional labels.
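As background, the following is a minimal sketch of the conditioning idea behind a cGAN: the class label is concatenated with the generator's input and with the discriminator's input, so both networks operate "given this label". The fully connected design, layer sizes, and names are illustrative assumptions only and do not correspond to the architecture proposed in this paper.

```python
# Minimal sketch of cGAN conditioning (illustrative sizes; not this paper's model).
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, z_dim=100, n_classes=10, img_dim=28 * 28):
        super().__init__()
        # The condition (a one-hot class label) is concatenated with the noise
        # vector before being mapped to an image.
        self.net = nn.Sequential(
            nn.Linear(z_dim + n_classes, 256), nn.ReLU(),
            nn.Linear(256, img_dim), nn.Tanh(),
        )

    def forward(self, z, y_onehot):
        return self.net(torch.cat([z, y_onehot], dim=1))

class CondDiscriminator(nn.Module):
    def __init__(self, n_classes=10, img_dim=28 * 28):
        super().__init__()
        # The discriminator sees the image together with the same condition,
        # so it judges "real or synthesized, given this label".
        self.net = nn.Sequential(
            nn.Linear(img_dim + n_classes, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),
        )

    def forward(self, x, y_onehot):
        return self.net(torch.cat([x, y_onehot], dim=1))
```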
Conditional image-to-image translation. Conditional image-to-image translation methods can be divided into three categories based on the input conditions. Class-conditional methods take class-wise labels as input to synthesize images [22], [23], [24], [25] while, more recently, text-conditional methods have been introduced [26], [27]. cGAN-based methods [12], [13], [14], [15], [28], [29], [30], [31], [27], [26], [32] have been widely used for various image-to-image translation tasks, including unsupervised [30], high-quality [13], multi-modal [28], [14], [15], and semantic layout conditional image-to-image translation [12], [13], [14], [15]. In semantic layout conditional methods, realistic images are synthesized under the guidance of the semantic layout, meaning that it is easier to control a particular region of the image.

AI-based diagnosis using COVID-19 CT imaging. Since the outbreak of COVID-19, many researchers have turned to CT imaging technology in order to diagnose and investigate this disease. COVID-19 diagnosis methods based on chest CT imaging have been introduced in order to improve test efficiency [33], [34], [35], [36]. Rather than using CT imaging for rapid COVID-19 diagnosis, semantic segmentation approaches have been employed to clearly label the focus position in order to make it easier for medical personnel to identify infected regions in a CT image [37], [38], [39], [40], [41]. As an alternative to working at the pixel level, high-level classification or detection approaches have been proposed [42], [43], [44], which can allow medical imaging experts to rapidly locate areas of infection, thus speeding up the diagnosis process. Though two CT image synthesis methods have been previously reported [45], [46], they did not focus on COVID-19 or lung CT imaging; to the best of our knowledge, our proposed method is the first designed specifically for COVID-19 CT image synthesis.

III. COVID-19 CT IMAGE SYNTHESIS WITH A CONDITIONAL GENERATIVE ADVERSARIAL NETWORK

In this paper, we propose a cGAN-based COVID-19 CT image synthesis method. Here, COVID-19 CT image synthesis is formulated as a semantic-layout-conditional image-to-image translation task. The proposed method is inspired by Pix2pixHD [13], with the structure consisting of two main components: a global-local generator and a multi-resolution discriminator.

Fig. 2. Overview of the proposed method. The upper section containing global-local generator blocks and multi-resolution discriminator blocks represents the training process, while the lower right section shows the testing process. Within the global-local generator blocks, two types of generator are present: a global information generator and a local detail generator. The multi-resolution discriminator is depicted in gray. The synthesized images are transferred from the generator to the discriminator, and this process is shown as the dashed arrow. The yellow arrow shows the completion step, in which the non-lung region is added to the synthesized lung image.
During the training stage, the semantic segmentation map of a corresponding CT image is passed to the global-local generator, where the label information from the segmentation map is extracted via down-sampling and re-rendered to generate a synthesized image via up-sampling. The segmentation map is then concatenated with the corresponding CT image or synthesized CT image to form the input for the multi-resolution discriminator, which is used to distinguish the input as either real or synthesized. The decisions from the discriminator are used to calculate the loss and update the parameters for both the generator and discriminator. During the testing stage, only the generator is involved. A data-augmented segmentation map is used as input for the generator, from which a realistic synthesized image can be obtained after extraction and re-rendering. This synthesized lung CT image is then combined with the non-lung area to form a complete synthesized CT image as the final result. Figure 2 presents an overview of the proposed method.

The global-local generator G has two sub-components: the global information generator G1 and the local detail generator G2. These generators work together in a coarse-to-fine manner. G1 takes charge of learning and re-rendering global information, which contains high-level knowledge (e.g., semantic segmentation labels and image structure information). G2 is then used for detail enhancement (e.g., image texture and fine structures). We train the global-local generator using a three-step process:

1) Individual training for the global information generator: As shown in Figure 3, G1 takes a half-resolution (256 × 256) segmentation map as input, which is then down-sampled to reduce the feature dimensions to 32 × 32. Nine residual blocks that maintain the dimensions at 32 × 32 are used to reduce the computational complexity and generate a large receptive field [47]. Finally, the features are up-sampled and reconstructed back into a half-resolution (256 × 256) synthesized image.

2) Individual training for the local details generator: The structure of the local detail generator G2, which is similar to that of G1, is shown in Figure 4. Rather than taking a low-resolution segmentation map as input, the local detail generator begins the synthesis process with a full-resolution segmentation map (512 × 512) and maintains this size throughout.
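The following PyTorch sketch illustrates the coarse-to-fine generator structure described above: down-sampling, nine residual blocks at 32 × 32, and up-sampling in G1, plus a full-resolution local branch G2 whose decoder fuses G1's features by element-wise summation (the joint-training step described next). Channel widths, kernel sizes, and normalization choices are assumptions for illustration; only the overall structure follows the text.

```python
# Rough sketch of the global-local generator (assumed channel widths and norms).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.InstanceNorm2d(ch),
        )

    def forward(self, x):
        return x + self.body(x)

class GlobalGenerator(nn.Module):
    """G1: 256x256 segmentation map -> 256x256 synthesized image."""
    def __init__(self, in_ch=1, base=64, n_down=3, n_blocks=9):
        super().__init__()
        layers = [nn.Conv2d(in_ch, base, 7, padding=3), nn.ReLU(inplace=True)]
        ch = base
        for _ in range(n_down):                       # 256 -> 32 spatial size
            layers += [nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True)]
            ch *= 2
        layers += [ResidualBlock(ch) for _ in range(n_blocks)]   # nine residual blocks
        self.down = nn.Sequential(*layers)
        up = []
        for _ in range(n_down):                       # 32 -> 256 spatial size
            up += [nn.ConvTranspose2d(ch, ch // 2, 3, stride=2, padding=1, output_padding=1),
                   nn.ReLU(inplace=True)]
            ch //= 2
        self.up = nn.Sequential(*up)
        self.to_img = nn.Sequential(nn.Conv2d(ch, 1, 7, padding=3), nn.Tanh())

    def forward(self, seg_half):
        feat = self.up(self.down(seg_half))
        return self.to_img(feat), feat                # features are reused by G2

class LocalEnhancer(nn.Module):
    """G2: 512x512 segmentation map -> 512x512 synthesized image.
    In joint training its decoder consumes the element-wise sum of its own
    encoder features and G1's up-sampled features."""
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(in_ch, base, 7, padding=3), nn.ReLU(inplace=True),
            nn.Conv2d(base, base * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decode = nn.Sequential(
            ResidualBlock(base * 2),
            nn.ConvTranspose2d(base * 2, base, 3, stride=2, padding=1, output_padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(base, 1, 7, padding=3), nn.Tanh(),
        )

    def forward(self, seg_full, g1_feat):
        return self.decode(self.encode(seg_full) + g1_feat)   # element-wise fusion

# Example (illustrative): run the coarse-to-fine pair on a dummy segmentation map.
if __name__ == "__main__":
    g1, g2 = GlobalGenerator(), LocalEnhancer()
    seg_full = torch.randn(1, 1, 512, 512)
    seg_half = F.interpolate(seg_full, scale_factor=0.5)
    img_half, feat = g1(seg_half)       # 256x256 synthesis + features for fusion
    img_full = g2(seg_full, feat)       # 512x512 synthesis
```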
Maintaining the full resolution allows the local detail generator to fully learn fine texture and structure and to focus on low-level information within the input image. G2 has a similar encoding-decoding training procedure to that of G1, though the output synthesized image is 512 × 512.

3) Joint training for the global-local generator: After training G1 and G2 separately, a joint training process is conducted, as shown in the global-local generator region of Figure 2. In the joint training stage, both G1 and G2 take the same input but at different resolutions (half- and full-resolution, respectively). The two networks run a forward process that differs from the individual training stage in that the up-sampling process in G2 takes the element-wise sum of the output feature maps from the up-sampling process in G1 and the output feature maps from the down-sampling process in G2, meaning that G2 receives both global and local information to reconstruct the output. This training strategy enables the global-local generator G to effectively learn both global information and local details while also stabilizing the training process by dividing it into three relatively simple procedures.

A multi-resolution discriminator D is proposed in this paper. D consists of two sub-components: the full-resolution discriminator D1 and the half-resolution discriminator D2. The design of these discriminators follows SPADE [14]; rather than making a decision for the whole image, we utilize a patch-wise discriminator. This means that the proposed discriminator can perceive both the global information and the details of the image. As shown in Figure 5, we first down-sample the segmentation map, synthesized image, and real image into half-resolution form; the synthesized and real images are then randomly chosen to be concatenated with the segmentation map to form two inputs (full- and half-resolution) for D. Before being sent to the discriminator, each input is randomly sampled into a series of 70 × 70 patches. After passing these patches through the discriminator, two decision matrices are obtained, which represent the patch-wise decisions for the two inputs. The patch-wise sampling method enables the multi-resolution discriminator D to effectively learn local details, which can significantly improve the quality of the synthesized image. By assigning global and local discrimination to the individual discriminators D1 and D2, the global structure can be maintained while also enhancing the details of the synthesized images.

The overall learning objective of the proposed approach can be represented by Equation (1):

min_G [ ( max_{D1,D2} Σ_{i=1,2} L_cGAN(G, D_i) ) + λ Σ_{i=1,2} L_FM(G, D_i) ]    (1)

There are two main loss terms in the overall learning objective (1): the cGAN loss L_cGAN and the feature matching loss L_FM. The variable x is the real input image and m is the corresponding segmentation map. G represents the global-local generator, while D_i represents the full-resolution discriminator D1 or the half-resolution discriminator D2. G(m) denotes the synthesized image produced by generator G with input segmentation map m, and D_i(m, x) and D_i(m, G(m)) are the patch-wise decisions made by the multi-resolution discriminator D with the real image or synthesized image as input, respectively. λ is the weight factor for the feature matching loss term.

We design the cGAN loss function based on pix2pix [12], as shown in Equation (2):

L_cGAN(G, D_i) = E_{(m,x)}[log D_i(m, x)] + E_{m}[log(1 − D_i(m, G(m)))]    (2)

This loss term allows the cGAN to generate a realistic synthesized image that can fool the discriminator under the condition of the input segmentation map.
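Below is a hedged sketch of the multi-resolution, patch-wise discriminator and the adversarial term of Eq. (2). The paper describes randomly sampling 70 × 70 patches; the sketch instead uses a fully convolutional PatchGAN-style network whose output grid gives patch-wise decisions with a comparable receptive field. Channel widths and the single-channel segmentation-map input are assumptions.

```python
# Sketch of the multi-resolution patch-wise discriminator and cGAN loss (Eq. 2).
import torch
import torch.nn as nn
import torch.nn.functional as F

class PatchDiscriminator(nn.Module):
    """PatchGAN-style discriminator: outputs a grid of real/synthesized scores."""
    def __init__(self, in_ch=2, base=64):  # in_ch = seg map (1) + CT image (1); an assumption
        super().__init__()
        def block(i, o, norm=True):
            layers = [nn.Conv2d(i, o, 4, stride=2, padding=1)]
            if norm:
                layers.append(nn.InstanceNorm2d(o))
            layers.append(nn.LeakyReLU(0.2, inplace=True))
            return layers
        self.net = nn.Sequential(
            *block(in_ch, base, norm=False),
            *block(base, base * 2),
            *block(base * 2, base * 4),
            nn.Conv2d(base * 4, base * 8, 4, stride=1, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(base * 8, 1, 4, stride=1, padding=1),   # patch-wise decision matrix
        )

    def forward(self, seg, img):
        # The segmentation map is concatenated with the real or synthesized image.
        return self.net(torch.cat([seg, img], dim=1))

def cgan_loss(d_real, d_fake):
    """Eq. (2): real patches should score 1, synthesized patches 0."""
    return (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
            F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))

def multi_resolution_forward(D1, D2, seg, real, fake):
    """Apply the full-resolution (D1) and half-resolution (D2) discriminators."""
    seg_half = F.avg_pool2d(seg, 2)
    real_half, fake_half = F.avg_pool2d(real, 2), F.avg_pool2d(fake, 2)
    return [(D1(seg, real), D1(seg, fake)),
            (D2(seg_half, real_half), D2(seg_half, fake_half))]
```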
In order to train the multi-resolution discriminator D, a feature matching loss (Eq. (3)), inspired by [14], is employed:

L_FM(G, D_i) = E_{(m,x)} Σ_{k=1}^{T} (1/N_k) || D_i^{(k)}(m, x) − D_i^{(k)}(m, G(m)) ||_1    (3)

where D_i^{(k)} denotes the k-th layer of discriminator D_i, N_k is the total number of elements in that layer, and T is the total number of layers. This loss penalizes differences between the intermediate features of both the full- and half-resolution discriminators in order to stabilize the training process and allow D1 and D2 to synchronously learn details from inputs at different resolutions.

Rather than using both the global-local generator G and the multi-resolution discriminator D as in the training stage, we only utilize the pre-trained G in the testing process. The input for G in this stage is a data-augmented segmentation map that can be obtained using standard image editing software. After passing it through G, a synthesized CT image of the lung area is generated. The final step in the process combines the synthesized lung image with the corresponding non-lung area from the real image to produce a complete synthesized image.

A. Experimental settings

Dataset. In order to evaluate the proposed method and compare its performance with other state-of-the-art methods, we use 829 lung CT slices from nine COVID-19 patients, which were made public on 13 April 2020 by Radiopaedia [48]. This dataset includes the original CT images, lung masks, and COVID-19 infection masks. The infection masks contain ground-glass opacity and consolidation labels, which are the two most common characteristics used for COVID-19 diagnosis in lung CT imaging [49]. In this experiment, we select 373 slices that contain clear areas of infection. We divide the selected dataset into training and test sets consisting of 300 and 73 images, respectively. To fully train the deep-learning-based model, data augmentation pre-processing is applied. The 300 original images from the training set are augmented to produce 12,000 images, while the 73 images from the test set are augmented to produce 10,220 images (Table I). The data augmentation methods include random resizing and cropping, random rotation, Gaussian noise, and elastic transform.

Evaluation metrics. To accurately assess model performance, we utilize both image quality metrics and medical imaging semantic segmentation metrics. Four image quality metrics are considered in this study: Fréchet inception distance (FID) [50], peak signal-to-noise ratio (PSNR) [51], structural similarity index measure (SSIM) [51], and root mean square error (RMSE) [15]. FID measures the similarity of the distributions of real and synthesized images using a deep-learning model. PSNR and SSIM are the most widely used metrics for evaluating the performance of image restoration and reconstruction methods: the former represents the ratio between the maximum possible intensity of a signal and the intensity of corrupting noise, while the latter reflects the structural similarity between two images. Three semantic segmentation metrics for medical imaging are used in this experiment: the Dice score (Dice), sensitivity (Sen), and specificity (Spec) [52], [53]. The Dice score evaluates the area of overlap between a prediction and the ground truth, while sensitivity and specificity are two statistical metrics for the performance of binary medical image segmentation tasks: the former measures the percentage of actual positive pixels that are correctly predicted to be positive, while the latter measures the proportion of actual negative pixels that are correctly predicted to be negative.
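The segmentation metrics defined above can be computed directly from the binary prediction and ground-truth masks. The following NumPy sketch shows the standard formulas (the smoothing constant eps is an assumption to avoid division by zero), together with PSNR for reference.

```python
# Standard per-image metric formulas on binary masks (eps is an assumption).
import numpy as np

def dice_sensitivity_specificity(pred, target, eps=1e-7):
    """pred, target: arrays of the same shape; nonzero/True = focus pixel."""
    pred, target = pred.astype(bool), target.astype(bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    tn = np.logical_and(~pred, ~target).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)    # overlap between prediction and ground truth
    sensitivity = tp / (tp + fn + eps)          # actual positives correctly predicted positive
    specificity = tn / (tn + fp + eps)          # actual negatives correctly predicted negative
    return dice, sensitivity, specificity

def psnr(real, synth, data_range=255.0):
    """Peak signal-to-noise ratio between a real and a synthesized image."""
    mse = np.mean((real.astype(np.float64) - synth.astype(np.float64)) ** 2)
    return 10 * np.log10(data_range ** 2 / mse)
```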
These three metrics are employed for semantic segmentation based on the assumption that, if the quality of the synthesized images is high enough, excellent segmentation performance can be achieved when using the synthesized images as input.

Implementation details. We transform all of the CT slices into gray-scale images on a Hounsfield unit (HU) scale of [-600, 1500]. The images and segmentation maps are then rescaled from 630 × 630 to 512 × 512. All of the image synthesis methods are trained for 20 epochs, with a learning rate that is maintained at 0.0002 for the first 10 epochs before linearly decaying to zero over the following ten epochs. The global-local generator G and multi-resolution discriminator D are trained using an Adam optimizer with parameters β1 = 0.5 and β2 = 0.999. The feature matching loss weight λ is set to 10. The batch size used to train the proposed method is 16. All of the experiments are run in an Ubuntu 18.04 environment using an Intel i7 9700k CPU and two Titan RTX graphics cards (48 GB VRAM).

The performance of the proposed method was assessed according to both image quality and medical imaging semantic segmentation.

1) Image quality evaluation: In this study, common image quality metrics are employed to assess the synthesis performance of the proposed method and four other state-of-the-art image synthesis methods: SEAN [15], SPADE [14], Pix2pixHD [13], and Pix2pix [12]. We evaluate image quality for two synthetic image categories: complete and lung-only images. The complete images are those CT images generated by merging a synthesized lung CT image with its corresponding non-lung CT image. The evaluation results are presented in Table II. The proposed method outperforms the other state-of-the-art methods on the four image quality metrics for both the complete and lung-only images. Due to the design of the global-local generator and multi-resolution discriminator, the proposed model can generate realistic lung CT images for COVID-19 with a complete global structure and fine local details while maintaining a relatively high signal-to-noise ratio. Thus, the proposed method achieves state-of-the-art image synthesis results in terms of image quality.

2) Medical imaging semantic segmentation evaluation: To evaluate the reconstruction capability of the proposed method, we utilize Unet, a common medical imaging semantic segmentation approach [54]. We first train the Unet model on a mix of synthetic and real CT images. This evaluation consists of two independent experiments: (1) keeping the total number of images the same while replacing the real data with synthesized data in proportions from 0% to 50% in steps of 10%, and (2) keeping the number of real images the same and adding a certain proportion of synthetic images, from 0% to 50% in steps of 10%. The first experiment evaluates how similar the synthetic and real data are, and the second evaluates the data augmentation potential of the synthetic data. We consider three categories in the assessment: ground-glass opacity, consolidation, and infection (which covers both ground-glass opacity and consolidation). The evaluation results for the two experiments are presented in Table III and Table IV, respectively. The pre-trained Unet model is then tested on a fixed real CT image dataset: the 10,220 images from the test set are divided equally into 10 folds, and the evaluation results are reported as mean ± 95% confidence interval across these folds.
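The two mixing experiments and the fold-wise reporting can be summarized in a short sketch. Function and variable names are assumptions for illustration; the ratios (0% to 50% in steps of 10%) and the 10-fold mean ± 95% confidence interval follow the text, with a normal approximation assumed for the interval.

```python
# Sketch of the data-mixing experiments and fold-wise reporting (assumed names).
import random
import numpy as np

def replace_with_synthetic(real_imgs, synth_imgs, ratio):
    """Experiment 1: keep the total count fixed, swap `ratio` of real for synthetic."""
    n_replace = int(len(real_imgs) * ratio)
    kept_real = random.sample(real_imgs, len(real_imgs) - n_replace)
    return kept_real + random.sample(synth_imgs, n_replace)

def add_synthetic(real_imgs, synth_imgs, ratio):
    """Experiment 2: keep all real images, add `ratio` extra synthetic images."""
    n_add = int(len(real_imgs) * ratio)
    return real_imgs + random.sample(synth_imgs, n_add)

def mean_ci95(per_fold_scores):
    """Mean ± 95% confidence interval over the test folds (normal approximation)."""
    scores = np.asarray(per_fold_scores, dtype=np.float64)
    half_width = 1.96 * scores.std(ddof=1) / np.sqrt(len(scores))
    return scores.mean(), half_width

ratios = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]   # 0% to 50% in steps of 10%
```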
Table III presents the experimental results for different replacement ratios for the synthetic data. We obtain the best performance when using pure real data as the training set. When the real data is replaced with a proportion of synthetic data, the semantic segmentation performance of Unet does not decrease, but rather remains stable. When 30% of the real data is replaced with synthetic data, Unet obtains the best performance on the Spec metric for ground-glass opacity and on the Dice and Sen metrics for consolidation. The experimental results thus indicate that the synthetic CT images are similar to real CT images. They are sufficiently realistic for semantic segmentation with Unet to remain promising even when real data is replaced with a large proportion of synthetic data.

Table IV presents the semantic segmentation results when a certain proportion of extra synthetic data is added to the real data. The best performance is obtained when adding 40% synthetic data. Overall, the results indicate that the synthetic CT images are sufficiently diverse and realistic, meaning that they have the potential to be used as synthetic training data to improve dataset quality for deep-learning-based COVID-19 diagnosis.

To intuitively demonstrate the synthetic results and compare them with those of other state-of-the-art image synthesis methods, we show synthetic examples in Figure 6 and Figure 7. The synthetic images for three individual cases are compared in Figure 6. In the first case, a consolidation infection area is located in the lower left of the CT image. Comparing the synthetic results from the proposed method, SEAN [15], and SPADE [14], the infection area retains its original structure and texture in the result generated by the proposed method, whereas the results of SEAN and SPADE contain unnatural artifacts (holes) at the position indicated by the yellow arrow. In the second case, a large area of ground-glass opacity is detected; the results of SPADE and SEAN miss some small lung areas in the middle of the infection area, but the proposed method reproduces these small lung areas correctly. The final case contains both categories of infection, consolidation and ground-glass opacity, with the ground-glass opacity surrounded by the consolidation area. Focusing on this surrounded area, the boundary between the two infection areas is not clear in the synthetic image from SEAN, and the ground-glass area is mistakenly generated as lung area in the synthesized image from SPADE.
The result of the proposed method for this third case shows that it can handle this complex situation and produce realistic synthetic CT images of high quality.

We present synthetic examples generated by the proposed method in Figure 7. We select one example for each patient (8 samples from 9 patients; patient #3 is skipped because its segmentation maps were mislabeled). For Patient #0, the consolidation area is located at the bottom of the lung area; the synthetic image shows a sharp and high-contrast consolidation area that can be easily distinguished from the surrounding non-lung region. The slices for Patients #1 and #4 are similar in that the lung area contains widespread ground-glass opacity, with consolidation sporadically located within it. The small consolidation areas can be easily identified due to the clear boundary between the two infection types. Patient #6 shows ground-glass opacity and consolidation that are distant from each other. The results thus illustrate that the proposed method can handle the two types of infection areas together in a single lung CT image. The CT slices of Patients #5, #7, and #8 show the simplest cases, with only a single category of infection (ground-glass opacity). The experimental results thus indicate that realistic ground-glass opacity can be obtained using the proposed method.

In this paper, we proposed a cGAN-based COVID-19 CT image synthesis method that can generate realistic CT images containing the two main infection types, ground-glass opacity and consolidation. The proposed method takes the semantic segmentation map of a corresponding lung CT image as input, and the cGAN structure learns the characteristics and information of the CT image. A global-local generator and a multi-resolution discriminator are employed to effectively balance global information with local details in the CT image. The experimental results show that the proposed method is able to generate realistic synthetic CT images and achieves state-of-the-art performance in terms of image quality when compared with common image synthesis approaches. In addition, the evaluation results for semantic segmentation performance show that the high image quality and fidelity of the synthetic CT images enable their use as synthetic training data for COVID-19 diagnosis with AI models. For future research, the authors plan to fully utilize high-quality synthetic COVID-19 CT images to improve specific computer vision approaches that can help in the fight against COVID-19, such as lung CT image semantic segmentation and rapid lung CT image-based COVID-19 diagnosis.

Fig. 6. Synthetic lung CT images generated by the proposed method and two other competitive state-of-the-art image synthesis approaches. The first column shows the segmentation map, including the lung (red), ground-glass opacity (blue), and consolidation (green) areas. The second column shows the original CT image. The third, fourth, and fifth columns show the synthetic samples generated by the proposed method, SEAN [15], and SPADE [14], respectively. Each case is shown zoomed in to reveal more detail, and the yellow arrows point out the specific areas described in the main text.

Fig. 7. Synthetic lung CT images generated by the proposed method. Eight samples are selected, each from an individual patient. The first column shows the segmentation map, including the lung (red), ground-glass opacity (blue), and consolidation (green) areas.
The second and third columns show the original and synthetic CT images, respectively. The synthetic CT images here merge the synthetic lung CT image with the corresponding real non-lung area. The fourth and fifth columns depict the original and synthesized lung-only CT images, respectively.

REFERENCES
[1] Coronavirus disease (COVID-19) pandemic
[2] Johns Hopkins coronavirus resource center
[3] Clinical characteristics of coronavirus disease 2019 in China
[4] Interim clinical guidance for management of patients with confirmed coronavirus disease (COVID-19)
[5] Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases
[6] Sensitivity of chest CT for COVID-19: comparison to RT-PCR
[7] Holistic and comprehensive annotation of clinically significant findings on diverse CT images: learning from radiology reports and label ontology
[8] ToothNet: automatic tooth instance segmentation and identification from cone beam CT images
[9] Reducing uncertainty in undersampled MRI reconstruction with active acquisition
[10] X2CT-GAN: reconstructing CT from biplanar X-rays with generative adversarial networks
[11] Generative adversarial nets
[12] Image-to-image translation with conditional adversarial networks
[13] High-resolution image synthesis and semantic manipulation with conditional GANs
[14] Semantic image synthesis with spatially-adaptive normalization
[15] SEAN: image synthesis with semantic region-adaptive normalization
[16] Deep photo enhancer: unpaired learning for image enhancement from photographs with GANs
[17] Bringing old photos back to life
[18] 3D human pose estimation in the wild by adversarial learning
[19] Pose guided person image generation
[20] Video-to-video synthesis
[21] Few-shot video-to-video synthesis
[22] Conditional generative adversarial nets
[23] Conditional image synthesis with auxiliary classifier GANs
[24] COCO-Stuff: thing and stuff classes in context
[25] Which training methods for GANs do actually converge
[26] AttnGAN: fine-grained text to image generation with attentional generative adversarial networks
[27] Inferring semantic layout for hierarchical text-to-image synthesis
[28] Unpaired image-to-image translation using cycle-consistent adversarial networks
[29] Toward multimodal image-to-image translation
[30] Unsupervised image-to-image translation networks
[31] Multimodal unsupervised image-to-image translation
[32] Image generation from layout
[33] Diagnosis of coronavirus disease 2019 (COVID-19) with structured latent multi-view representation learning
[34] Clinically applicable AI system for accurate diagnosis, quantitative measurements, and prognosis of COVID-19 pneumonia using computed tomography
[35] Application of deep learning technique to manage COVID-19 in routine clinical practice using CT images: results of 10 convolutional neural networks
[36] Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT
[37] An automatic COVID-19 CT segmentation based on U-Net with attention mechanism
[38] Relational modeling for robust and efficient pulmonary lobe segmentation in CT scans
[39] Deep learning models for COVID-19 infected area segmentation in CT images
[40] Residual attention U-Net for automated multi-class segmentation of COVID-19 chest CT images
[41] Inf-Net: automatic COVID-19 lung infection segmentation from CT images
[42] Deep learning-based detection for COVID-19 from chest CT using weak label
[43] Coronavirus detection and analysis on chest CT with deep learning
[44] Weakly supervised deep learning for COVID-19 infection detection and classification from CT images
[45] Evaluation of CT image synthesis methods: from atlas-based registration to deep learning
[46] A novel computed tomography image synthesis method for correcting the spectrum dependence of CT numbers
[47] Perceptual losses for real-time style transfer and super-resolution
[48] COVID-19 CT segmentation dataset
[49] CT imaging features of 2019 novel coronavirus (2019-nCoV)
[50] GANs trained by a two time-scale update rule converge to a local Nash equilibrium
[51] Image quality metrics: PSNR vs. SSIM
[52] Evaluation of segmentation algorithms for medical imaging
[53] V-Net: fully convolutional neural networks for volumetric medical image segmentation
[54] U-Net: convolutional networks for biomedical image segmentation