
Deep learning methods to generate synthetic CT from MRI in radiotherapy: A literature review

Published: August 30, 2021 · DOI: https://doi.org/10.1016/j.ejmp.2021.07.027

      Highlights

      • Review of deep learning approaches to generate synthetic-CTs for MRI-based dose calculation.
      • Overview and discussion of image and dose metrics for synthetic-CT evaluation.
      • Review of synthetic-CT image and dose accuracy per anatomical localization.

      Abstract

      Purpose

      In radiotherapy, MRI is used for target volume and organs-at-risk delineation for its superior soft-tissue contrast as compared to CT imaging. However, MRI does not provide the electron density of tissue necessary for dose calculation. Several methods of synthetic-CT (sCT) generation from MRI data have been developed for radiotherapy dose calculation. This work reviewed deep learning (DL) sCT generation methods and their associated image and dose evaluation, in the context of MRI-based dose calculation.

      Methods

      We searched the PubMed and ScienceDirect electronic databases from January 2010 to March 2021. For each paper, several items were screened and compiled in figures and tables.

      Results

      This review included 57 studies. The DL methods were either generator-only based (45% of the reviewed studies) or based on the generative adversarial network (GAN) architecture and its variants (55%). The brain and pelvis were the most commonly investigated anatomical localizations (39% and 28% of the reviewed studies, respectively), and more rarely, the head-and-neck (H&N) (15%), abdomen (10%), liver (5%), or breast (3%). All studies performed an image evaluation of the sCTs with a diversity of metrics, while only 36 studies performed a dosimetric evaluation of the sCTs.

      Conclusions

      The median mean absolute error was around 76 HU for brain and H&N sCTs and 40 HU for pelvis sCTs. For the brain, the mean dose difference between the sCT and the reference CT was <2%. For the H&N and pelvis, the mean dose difference was below 1% in most studies. Recent GAN architectures have advantages over generator-only models, but no superiority was found in terms of image or dose sCT uncertainty. Key challenges for DL-based sCT generation from MRI in radiotherapy are the management of motion for abdominal and thoracic localizations, the standardization of sCT evaluation, and the investigation of multicenter impacts.


      Introduction

      In radiation therapy, computed tomography (CT) is the standard imaging modality for treatment planning. Magnetic resonance imaging (MRI) is a complementary modality to CT providing better soft-tissue contrast without irradiation. MRI improves the delineation accuracy of the target volume and/or organs at risk (OARs) in the brain, head-and-neck (H&N), and lung or prostate radiotherapy [
      • Pathmanathan A.U.
      • McNair H.A.
      • Schmidt M.A.
      • Brand D.H.
      • Delacroix L.
      • Eccles C.L.
      • et al.
      Comparison of prostate delineation on multimodality imaging for MR-guided radiotherapy.
      ,

      Kerkmeijer LGW, Maspero M, Meijer GJ, van der Voort van Zyp JRN, de Boer HCJ, van den Berg CAT. Magnetic Resonance Imaging only Workflow for Radiotherapy Simulation and Planning in Prostate Cancer. Clin. Oncol. 2018;30:692–701. 10.1016/j.clon.2018.08.009.

      ,
      • Jonsson J.
      • Nyholm T.
      • Söderkvist K.
      The rationale for MR-only treatment planning for external radiotherapy.
      ]. However, MRI does not provide information on the electron density of tissue, which is required for accurate dose calculation. Most of the literature has proposed generating synthetic-CT (sCT) images for MRI-based dose planning. An sCT (or pseudo-CT) is a synthetic image in Hounsfield units (HU) generated from MRI data.
      The methods for generating sCTs can be divided into three categories: bulk density, atlas-based and machine learning (ML) methods (including classical ML methods and deep learning methods [DLMs]). The bulk density methods consist of segmenting MRI images into several classes (usually air, soft-tissue, and bone). Each of these delineated volumes is assigned a homogeneous electron density, and the dose can then be calculated. This method has several drawbacks: it is tedious, time-consuming, operator-dependent, and does not consider tissue heterogeneity [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Lafond C.
      • Greer P.B.
      • Dowling J.A.
      • et al.
      Pseudo-CT Generation for MRI-Only Radiation Therapy Treatment Planning: Comparison Among Patch-Based, Atlas-Based, and Bulk Density Methods.
      ,
      • Dowling J.A.
      • Sun J.
      • Pichler P.
      • Rivest-Hénault D.
      • Ghose S.
      • Richardson H.
      • et al.
      Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences.
      ,
      • Cusumano D.
      • Placidi L.
      • Teodoli S.
      • Boldrini L.
      • Greco F.
      • Longo S.
      • et al.
      On the accuracy of bulk synthetic CT for MR-guided online adaptive radiotherapy.
      ,
      • Choi J.H.
      • Lee D.
      • O’Connor L.
      • Chalup S.
      • Welsh J.S.
      • Dowling J.
      • et al.
      Bulk Anatomical Density Based Dose Calculation for Patient-Specific Quality Assurance of MRI-Only Prostate Radiotherapy.
      ,
      • Kemppainen R.
      • Suilamo S.
      • Ranta I.
      • Pesola M.
      • Halkola A.
      • Eufemio A.
      • et al.
      Assessment of dosimetric and positioning accuracy of a magnetic resonance imaging-only solution for external beam radiotherapy of pelvic anatomy.
      ]. The atlas-based methods involve complex, non-rigid registrations of one or several co-registered MRI-CT atlases with a target MRI. This registration step is followed by a fusion step to generate the sCT. The drawbacks of this method are the lack of robustness in the case of large anatomical variations and the need for computationally intensive pairwise registrations [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Lafond C.
      • Greer P.B.
      • Dowling J.A.
      • et al.
      Pseudo-CT Generation for MRI-Only Radiation Therapy Treatment Planning: Comparison Among Patch-Based, Atlas-Based, and Bulk Density Methods.
      ,
      • Dowling J.A.
      • Sun J.
      • Pichler P.
      • Rivest-Hénault D.
      • Ghose S.
      • Richardson H.
      • et al.
      Automatic Substitute Computed Tomography Generation and Contouring for Magnetic Resonance Imaging (MRI)-Alone External Beam Radiation Therapy From Standard MRI Sequences.
      ,
      • Chen S.
      • Quan H.
      • Qin A.
      • Yee S.
      • Yan D.
      MR image-based synthetic CT for IMRT prostate treatment planning and CBCT image-guided localization.
      ,
      • Huynh T.
      • Gao Y.
      • Kang J.
      • Wang L.
      • Zhang P.
      • Lian J.
      • et al.
      Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model.
      ]. Among the classical ML methods, the patch-based methods (such as [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Lafond C.
      • Greer P.B.
      • Dowling J.A.
      • et al.
      Pseudo-CT Generation for MRI-Only Radiation Therapy Treatment Planning: Comparison Among Patch-Based, Atlas-Based, and Bulk Density Methods.
      ]) can be decomposed into four steps: inter-patient rigid or affine registration of the MR images, feature extraction, and patch partitioning during the training step, followed by selection of the training patches closest to the patches of the target MRI, which are aggregated to generate the sCT [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Lafond C.
      • Greer P.B.
      • Dowling J.A.
      • et al.
      Pseudo-CT Generation for MRI-Only Radiation Therapy Treatment Planning: Comparison Among Patch-Based, Atlas-Based, and Bulk Density Methods.
      ]. The main drawbacks of this method are the imprecise inter-patient registration and the long calculation time.
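As an illustration, the patch-based pipeline described above can be reduced to a toy sketch (function and variable names are hypothetical; registration and feature extraction are assumed already done, and the aggregation step is simplified to copying the HU value at the centre of the single best-matching atlas patch):

```python
import numpy as np

def extract_patches(img, size=3):
    """Slide a size x size window over a 2D image; return flattened patches and centres."""
    h, w = img.shape
    r = size // 2
    patches, centers = [], []
    for i in range(r, h - r):
        for j in range(r, w - r):
            patches.append(img[i - r:i + r + 1, j - r:j + r + 1].ravel())
            centers.append((i, j))
    return np.array(patches), centers

def patch_based_sct(atlas_mri, atlas_ct, target_mri, size=3):
    """For each target-MRI patch, find the most similar atlas-MRI patch (L2
    distance) and copy the HU value at the matching atlas-CT patch centre."""
    train_p, train_c = extract_patches(atlas_mri, size)
    targ_p, targ_c = extract_patches(target_mri, size)
    sct = np.zeros_like(target_mri, dtype=float)
    for p, (i, j) in zip(targ_p, targ_c):
        dists = np.linalg.norm(train_p - p, axis=1)   # patch similarity
        bi, bj = train_c[int(np.argmin(dists))]
        sct[i, j] = atlas_ct[bi, bj]                  # simplified aggregation
    return sct
```

Real patch-based methods aggregate several weighted candidate patches rather than a single best match, which is part of why they are computationally expensive.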
      DLMs are models comprising multiple processing layers that learn multiscale representations of data through multiple levels of abstraction [
      • LeCun Y.
      • Bengio Y.
      • Hinton G.
      Deep learning.
      ]. These methods have recently been introduced in radiotherapy for applications, including image segmentation, image processing and reconstruction, image registration, treatment planning, and radiomics [
      • Meyer P.
      • Noblet V.
      • Mazzara C.
      • Lallement A.
      Survey on deep learning for radiotherapy.
      ,
      • Sahiner B.
      • Pezeshk A.
      • Hadjiiski L.M.
      • Wang X.
      • Drukker K.
      • Cha K.H.
      • et al.
      Deep learning in medical imaging and radiation therapy.
      ,
      • Jarrett D.
      • Stride E.
      • Vallis K.
      • Gooding M.J.
      Applications and limitations of machine learning in radiation oncology.
      ,

      Shen C, Nguyen D, Zhou Z, Jiang SB, Dong B, Jia X. An introduction to deep learning in medical physics: advantages, potential, and challenges. Phys Med Biol 2020;65:05TR01. 10.1088/1361-6560/ab6f51.

      ,

      Boldrini L, Bibault J-E, Masciocchi C, Shen Y, Bittner M-I. Deep Learning: A Review for the Radiation Oncologist. Front Oncol 2019;9. 10.3389/fonc.2019.00977.

      ,
      • Feng M.
      • Valdes G.
      • Dixit N.
      • Solberg T.D.
      Machine Learning in Radiation Oncology: Opportunities, Requirements, and Needs.
      ,
      • Bibault J.-E.
      • Giraud P.
      • Burgun A.
      Big Data and machine learning in radiation oncology: State of the art and future prospects.
      ,
      • Thompson R.F.
      • Valdes G.
      • Fuller C.D.
      • Carpenter C.M.
      • Morin O.
      • Aneja S.
      • et al.
      Artificial intelligence in radiation oncology: A specialty-wide disruptive transformation?.
      ]. DLMs have been proposed for sCT generation from MRI. They are trained to model the relationship between CT values in HU and MRI intensities. Once the optimal DL parameters have been estimated, the model can be applied to a test MRI to generate its corresponding sCT. DLMs have the advantage of being fast at sCT generation, and some do not require deformable inter-patient registration (only intra-patient registration), such as in [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Mylona E.
      • Castelli J.
      • Lafond C.
      • et al.
      Comparison of Deep Learning-Based and Patch-Based Methods for Pseudo-CT Generation in MRI-Based Prostate Dose Planning.
      ].
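The train-then-apply principle can be caricatured with a single linear "layer" fitted by gradient descent on paired intensities (a deliberate oversimplification of a real DLM: the data and the intensity-to-HU mapping below are entirely synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy paired training data: pretend HU = 2*intensity - 1000, plus noise.
train_mri = rng.uniform(0, 1000, size=500)
train_ct = 2.0 * train_mri - 1000.0 + rng.normal(0, 5, size=500)

# "Training": minimise the voxel-wise squared error by gradient descent.
x = train_mri / 1000.0                    # normalised inputs
w, b = 0.0, 0.0
for _ in range(2000):
    err = (w * x + b) - train_ct
    w -= 0.1 * np.mean(err * x)           # dL/dw
    b -= 0.1 * np.mean(err)               # dL/db

# "Inference": the frozen parameters map an unseen test MRI to a synthetic CT.
test_mri = np.array([100.0, 500.0, 900.0])
sct = w * (test_mri / 1000.0) + b
```

A real DLM replaces the linear map with a deep network, but the workflow is the same: estimate parameters on registered MRI-CT pairs, then apply them unchanged to new patients.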
      Two reviews, both published in 2018, have already summarized sCT generation methods from MRI [
      • Edmund J.M.
      • Nyholm T.
      A review of substitute CT generation for MRI-only radiation therapy.
      ,
      • Johnstone E.
      • Wyatt J.J.
      • Henry A.M.
      • Short S.C.
      • Sebag-Montefiore D.
      • Murray L.
      • et al.
      Systematic Review of Synthetic Computed Tomography Generation Methodologies for Use in Magnetic Resonance Imaging-Only Radiation Therapy.
      ]. However, they focused only on the bulk density, atlas-based, and voxel-based methods and did not include recent DLMs. Other studies have listed sCT generation methods from MRI in the context of MR-only radiotherapy [

      Kerkmeijer LGW, Maspero M, Meijer GJ, van der Voort van Zyp JRN, de Boer HCJ, van den Berg CAT. Magnetic Resonance Imaging only Workflow for Radiotherapy Simulation and Planning in Prostate Cancer. Clin. Oncol. 2018;30:692–701. 10.1016/j.clon.2018.08.009.

      ,
      • Bird D.
      • Henry A.M.
      • Sebag-Montefiore D.
      • Buckley D.L.
      • Al-Qaisieh B.
      • Speight R.
      A Systematic Review of the Clinical Implementation of Pelvic Magnetic Resonance Imaging (MR)-Only Planning for External Beam Radiation Therapy.
      ,
      • Owrangi A.M.
      • Greer P.B.
      • Glide-Hurst C.K.
      MRI-only treatment planning: benefits and challenges.
      ,

      Wafa B, Moussaoui A. A review on methods to estimate a CT from MRI data in the context of MRI-alone RT. Mèd Technol J 2018;2:150–78. 10.26415/2572-004X-vol2iss1p150-178.

      ]. More recently, Wang et al. [
      • Wang T.
      • Lei Y.
      • Fu Y.
      • Wynne J.F.
      • Curran W.J.
      • Liu T.
      • et al.
      A review on medical imaging synthesis using deep learning and its clinical applications.
      ] proposed a review on medical imaging synthesis using DL and Spadea and Maspero et al. [
      • Spadea M.F.
      • Maspero M.
      • Zaffino P.
      • Seco J.
      Deep learning-based synthetic-CT generation in radiotherapy and PET: a review.
      ] a review of sCT generation with DLMs from MR, CBCT, and PET images.
      This study aimed to review the literature on DLMs for MRI-based dose calculation in radiation therapy. This paper reviews the DL networks (with their loss functions), the image and dose endpoints used for evaluation, and the results per anatomical localization.

      Materials and methods

      We searched the PubMed and ScienceDirect electronic databases from January 2010 to March 2021 (date of first online release) using the following keywords: “deep learning”, “substitute CT” or “pseudo CT” or “computed tomography substitute” or “synthetic CT”, “MRI” or “MR” or “magnetic resonance imaging”, “radiation therapy” or “radiotherapy”. Mesh terms used in PubMed were: “radiotherapy”, “Magnetic Resonance Imaging”, and “deep learning”. The search string on PubMed was: “MRI” AND “radiotherapy” AND (“GAN” OR “CNN” OR “deep learning” OR “machine learning” OR “U-Net” OR “neural network”) NOT “radiomics” NOT “chemotherapy” NOT “brachytherapy” NOT “Positron Emission Tomography Computed Tomography” NOT “chemoradiotherapy” NOT “segmentation” NOT “reconstruction”. We only retained original research papers (no abstracts, no review papers) that reported data obtained from humans, were written in English, and addressed DL sCT generation from MRI in radiotherapy.
      For each paper, we screened the anatomical localization, MR device, MR sequence, pre- or post-treatment imaging, use of registration, number of patients included in the study, type of DL network, loss functions, number of patients in the training step, number of patients in the evaluation step, and the main image and dose evaluation results. Tables per anatomical localization (brain, H&N, breast-liver-abdomen, and pelvis) were created to compile this information.

      Results

      Fig. 1 summarizes the number of DL studies for sCT generation from MRI in radiation therapy per year and anatomical localization. The first study was published in 2016 [

      Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In: Carneiro G, Mateus D, Peter L, Bradley A, Tavares JMRS, Belagiannis V, et al., editors. Deep Learning and Data Labeling for Medical Applications, vol. 10008, Cham: Springer International Publishing; 2016, p. 170–8. 10.1007/978-3-319-46976-8_18.

      ] and, at the time of manuscript submission, a total of 57 articles meeting the selection criteria had been published. Some studies investigated sCT generation for several anatomical localizations [

      Nie D, Trullo R, Lian J, Petitjean C, Ruan S, Wang Q, et al. Medical Image Synthesis with Context-Aware Generative Adversarial Networks. In: Descoteaux M, Maier-Hein L, Franz A, Jannin P, Collins DL, Duchesne S, editors. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, vol. 10435, Cham: Springer International Publishing; 2017, p. 417–25. 10.1007/978-3-319-66179-7_48.

      ,
      • Xiang L.
      • Wang Q.
      • Nie D.
      • Zhang L.
      • Jin X.
      • Qiao Y.
      • et al.
      Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
      ,
      • Cusumano D.
      • Lenkowicz J.
      • Votta C.
      • Boldrini L.
      • Placidi L.
      • Catucci F.
      • et al.
      A deep learning approach to generate synthetic CT in low field MR-guided adaptive radiotherapy for abdominal and pelvic cases.
      ,
      • Lei Y.
      • Harms J.
      • Wang T.
      • Liu Y.
      • Shu H.-K.
      • Jani A.B.
      • et al.
      MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks.
      ,

      Fu J, Singhrao K, Cao M, Yu V, Santhanam AP, Yang Y, et al. Generation of abdominal synthetic CTs from 0.35T MR images using generative adversarial networks for MR-only liver radiotherapy. Biomed Phys Eng Express 2020;6:015033. 10.1088/2057-1976/ab6e1f.

      ].
      Fig. 1 Number of publications on deep learning methods for synthetic-CT generation from MRI in radiation therapy, per year and anatomical localization. *: ongoing year (up to March 2021); number of studies at the time of publication.
      In total, 24 studies were based on brain data, 9 on H&N data, 2 on breast data, 3 on liver data, 6 on abdomen data, and 18 on pelvic data.

      Common deep learning networks for sCT generation from MRI

      Deep learning, a mainstream class of ML methods, uses trainable computational models containing multiple processing components with adjustable parameters to learn a representation of the data. Many DL network architectures have been developed, depending on the specific application or learning data. Several reviews have detailed the DL network architectures for radiotherapy or medical imaging [
      • Meyer P.
      • Noblet V.
      • Mazzara C.
      • Lallement A.
      Survey on deep learning for radiotherapy.
      ,
      • Wang T.
      • Lei Y.
      • Fu Y.
      • Wynne J.F.
      • Curran W.J.
      • Liu T.
      • et al.
      A review on medical imaging synthesis using deep learning and its clinical applications.
      ,
      • Spadea M.F.
      • Maspero M.
      • Zaffino P.
      • Seco J.
      Deep learning-based synthetic-CT generation in radiotherapy and PET: a review.
      ,
      • Kazeminia S.
      • Baur C.
      • Kuijper A.
      • van Ginneken B.
      • Navab N.
      • Albarqouni S.
      • et al.
      GANs for Medical Image Analysis.
      ,
      • Litjens G.
      • Kooi T.
      • Bejnordi B.E.
      • Setio A.A.A.
      • Ciompi F.
      • Ghafoorian M.
      • et al.
      A survey on deep learning in medical image analysis.
      ,

      Zhou SK, Greenspan H, Davatzikos C, Duncan JS, van Ginneken B, Madabhushi A, et al. A review of deep learning in medical imaging: Image traits, technology trends, case studies with progress highlights, and future promises. ArXiv:200809104 [Cs, Eess] 2020.

      ,
      • Shen C.
      • Nguyen D.
      • Zhou Z.
      • Jiang S.B.
      • Dong B.
      • Jia X.
      An introduction to deep learning in medical physics: advantages, potential, and challenges.
      ]. The DL architectures for sCT generation from MRI can be roughly divided into two classes: generator-only models, and the generative adversarial network (GAN) and its variants (such as the conditional GAN, least-squares GAN, and cycle-GAN). Fig. 2 shows the hierarchy of the DL architectures.
      Fig. 2 Hierarchy of deep learning architectures. Deep learning (DL) architectures can roughly be divided into generator-only and generative adversarial network (GAN) classes. The generator-only class includes architectures such as the deep convolutional neural network (DCNN), deep embedding CNN (DECNN), fully convolutional network (FCNN), U-Net, and atrous spatial pyramid pooling (ASPP). The GAN family includes the GAN and its most popular variants: the least-squares GAN (LS-GAN), conditional GAN (cGAN), and cycle-GAN.

      Generator-only models

      • i.
        Basic concepts of convolutional neural networks (CNN)
      For image applications, a convolutional neural network (CNN, or ConvNet) is a popular class of deep neural networks using a set of convolution kernels/filters for detecting image features. A CNN consists of an input layer, multiple hidden layers and an output layer. The hidden layers include layers that perform convolutions with trainable kernels. Nonlinear activation functions (Rectified Linear Units (ReLU) [
      • Nair V.
      • Hinton G.
      Rectified linear units improve restricted boltzmann machines.
      ], Leaky-RELU [
      • Maas A.L.
      • Hannun A.Y.
      • Ng A.Y.
      Rectifier nonlinearities improve neural network acoustic models.
      ], Parametric-ReLU (PreLU) or exponential linear unit (ELU) [
      • Clevert D.A.
      • Unterthiner T.
      • Hochreiter S.
      Fast and accurate deep network learning by exponential linear units (ELUs).
      ]) play a crucial role in the discriminative capabilities of deep neural networks. The ReLU layer passes positive inputs unchanged and sets negative inputs to zero; it is the most commonly used activation layer due to its computational simplicity, representational sparsity, and linearity. It is common to periodically insert a pooling layer between successive convolutional layers in a CNN architecture. Pooling layers reduce the dimension (subsampling) of the feature maps generated by the convolutional operations. Pooling performs down-sampling by dividing the input into rectangular pooling regions and computing the average, maximum, or minimum of each region covered by the filter (mean-pooling, max-pooling, min-pooling). Batch normalization [
      • Ioffe S.
      • Szegedy C.
      Batch Normalization Accelerating Deep Network Training by Reducing Internal Covariate Shift.
      ] layers are inserted after a convolutional or fully connected layer to improve the convergence of the loss function during gradient descent (the optimizer). Batch normalization prevents the vanishing-gradient problem from arising and significantly reduces the time required for network convergence. After several convolution and pooling layers, a CNN generally ends with several fully connected layers. Dropout is one of the most widely used techniques for regularizing CNNs. A softmax layer is typically the final output layer in a neural network that performs multi-class classification (for example, object recognition).
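For illustration, the convolution, ReLU, and pooling operations described above can be sketched in plain numpy (single channel, stride 1, no padding; an intentionally naive, not efficient, implementation):

```python
import numpy as np

def conv2d(img, kernel):
    """'Valid' 2D convolution of an image with a trainable kernel (stride 1)."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """ReLU activation: keep positive activations, zero out negative ones."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Down-sample by taking the maximum of each size x size region."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))
```

In practice, frameworks fuse these operations into optimized multi-channel layers, but the arithmetic is the same.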
      • ii.
        Generator-only models
      The generator model can be considered a complex end-to-end mapping function that transforms an input MR image into its corresponding CT image. During the training phase, the generator tries to minimize an objective called the loss function (a voxel-wise loss LG), which is an intensity-based similarity measure between the generated image (sCT) and the corresponding ground-truth image (the real CT). Fig. 3 presents the global architecture of the generator-only model.
      Fig. 3 Illustration of the generator-only model. *: The generator model varies across networks. Generator models, often based on convolutional encoder-decoder (CED) networks, are trained to produce synthetic CTs (sCTs) from MRI. For this purpose, a single loss function LG between the sCT generated from the MRI and the registered CT image is computed. In the testing step, the MRI of a new test patient goes through the trained network to obtain the corresponding sCT.
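The voxel-wise loss LG is typically an L1 (mean absolute error) or L2 (mean squared error) term between the sCT and the registered reference CT; a minimal sketch of both (function names are illustrative):

```python
import numpy as np

def generator_loss_l1(sct, ct):
    """Voxel-wise L1 loss: mean absolute sCT-CT difference, in HU."""
    return float(np.mean(np.abs(sct - ct)))

def generator_loss_l2(sct, ct):
    """Voxel-wise L2 loss: mean squared sCT-CT difference."""
    return float(np.mean((sct - ct) ** 2))
```

L1 is often preferred for sCT generation because, unlike L2, it does not over-penalize the rare large HU errors at bone-air interfaces, which tends to produce sharper images.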
      In sCT generation from MRI, the generator architectures are generally based on convolutional encoder-decoder (CED) networks. In the literature, the variants of the generator model include the deep CED network [
      • Liu F.
      • Yadav P.
      • Baschnagel A.M.
      • McMillan A.B.
      MR-based treatment planning in radiation therapy using a deep learning approach.
      ], deep embedding CNN (DECNN) or Embedded Net [
      • Xiang L.
      • Wang Q.
      • Nie D.
      • Zhang L.
      • Jin X.
      • Qiao Y.
      • et al.
      Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
      ], fully convolutional network (FCN) [

      Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In: Carneiro G, Mateus D, Peter L, Bradley A, Tavares JMRS, Belagiannis V, et al., editors. Deep Learning and Data Labeling for Medical Applications, vol. 10008, Cham: Springer International Publishing; 2016, p. 170–8. 10.1007/978-3-319-46976-8_18.

      ], U-Net [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Mylona E.
      • Castelli J.
      • Lafond C.
      • et al.
      Comparison of Deep Learning-Based and Patch-Based Methods for Pseudo-CT Generation in MRI-Based Prostate Dose Planning.
      ,
      • Liu F.
      • Yadav P.
      • Baschnagel A.M.
      • McMillan A.B.
      MR-based treatment planning in radiation therapy using a deep learning approach.
      ,

      Andres EA, Fidon L, Vakalopoulou M, Lerousseau M, Carré A, Sun R, et al. Dosimetry-driven quality measure of brain pseudo Computed Tomography generated from deep learning for MRI-only radiotherapy treatment planning. Int J Radiat Oncol Biol Phys 2020:S0360301620311305. 10.1016/j.ijrobp.2020.05.006.

      ,
      • Han X.
      MR-based synthetic CT generation using a deep convolutional neural network method.
      ,
      • Spadea M.F.
      • Pileggi G.
      • Zaffino P.
      • Salome P.
      • Catana C.
      • Izquierdo-Garcia D.
      • et al.
      Deep Convolution Neural Network (DCNN) Multiplane Approach to Synthetic CT Generation From MR images—Application in Brain Proton Therapy.
      ,

      Wang Y, Liu C, Zhang X, Deng W. Synthetic CT Generation Based on T2 Weighted MRI of Nasopharyngeal Carcinoma (NPC) Using a Deep Convolutional Neural Network (DCNN). Front Oncol 2019;9. 10.3389/fonc.2019.01333.

      ,
      • Arabi H.
      • Dowling J.A.
      • Burgos N.
      • Han X.
      • Greer P.B.
      • Koutsouvelis N.
      • et al.
      Comparative study of algorithms for synthetic CT generation from MRI : Consequences for MRI -guided radiation planning in the pelvic region.
      ,
      • Gupta D.
      • Kim M.
      • Vineberg K.A.
      • Balter J.M.
      Generation of Synthetic CT Images From MRI for Treatment Planning and Patient Positioning Using a 3-Channel U-Net Trained on Sagittal Images.
      ,
      • Dinkla A.M.
      • Florkow M.C.
      • Maspero M.
      • Savenije M.H.F.
      • Zijlstra F.
      • Doornaert P.A.H.
      • et al.
      Dosimetric evaluation of synthetic CT for head and neck radiotherapy generated by a patch-based three-dimensional convolutional neural network.
      ,
      • Qi M.
      • Li Y.
      • Wu A.
      • Jia Q.
      • Li B.
      • Sun W.
      • et al.
      Multi-sequence MR image-based synthetic CT generation using a generative adversarial network for head and neck MRI-only radiotherapy.
      ,
      • Chen S.
      • Qin A.
      • Zhou D.
      • Yan D.
      Technical Note: U-net-generated synthetic CT images for magnetic resonance imaging-only prostate intensity-modulated radiation therapy treatment planning.
      ,

      Florkow MC, Zijlstra F, M.d LGWK, Maspero M, Berg CAT van den, Stralen M van, et al. The impact of MRI-CT registration errors on deep learning-based synthetic CT generation. Medical Imaging 2019: Image Processing, vol. 10949, International Society for Optics and Photonics; 2019, p. 1094938. 10.1117/12.2512747.

      ,
      • Florkow M.C.
      • Zijlstra F.
      • Willemsen K.
      • Maspero M.
      • van den Berg C.A.T.
      • Kerkmeijer L.G.W.
      • et al.
      Deep learning–based MR-to-CT synthesis: The influence of varying gradient echo–based MR images as input channels.
      ,

      Stadelmann JV, Schulz H, Heide UA van der, Renisch S. Pseudo-CT image generation from mDixon MRI images using fully convolutional neural networks. Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, vol. 10953, International Society for Optics and Photonics; 2019, p. 109530Z. 10.1117/12.2512741.

      ,
      • Neppl S.
      • Landry G.
      • Kurz C.
      • Hansen D.C.
      • Hoyle B.
      • Stöcklein S.
      • et al.
      Evaluation of proton and photon dose distributions recalculated on 2D and 3D Unet-generated pseudoCTs from T1-weighted MR head scans.
      ,
      • Olberg S.
      • Zhang H.
      • Kennedy W.R.
      • Chun J.
      • Rodriguez V.
      • Zoberi I.
      • et al.
      Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy.
      ,

      Li W, Li Y, Qin W, Liang X, Xu J, Xiong J, et al. Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy. Quant Imaging Med Surg 2020;10:1223–36. 10.21037/qims-19-885.

      ,
      • Kazemifar S.
      • McGuire S.
      • Timmerman R.
      • Wardak Z.
      • Nguyen D.
      • Park Y.
      • et al.
      MRI-only brain radiotherapy: Assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach.
      ,
      • Kazemifar S.
      • Barragán Montero A.M.
      • Souris K.
      • Rivas S.T.
      • Timmerman R.
      • Park Y.K.
      • et al.
      Dosimetric evaluation of synthetic CT generated with GANs for MRI-only proton therapy treatment planning of brain tumors: Dosimetric evaluation of synthetic CT generated with GANs for MRI-only proton therapy treatment planning of brain tumors.
      ], efficient CNN (eCNN) model [
      • Badrinarayanan V.
      • Kendall A.
      • Cipolla R.
      SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.
      ], ResNet [
      • He K.
      • Zhang X.
      • Ren S.
      • Sun J.
      Deep Residual Learning for Image Recognition.
      ], SE-ResNet [
      • He K.
      • Zhang X.
      • Ren S.
      • Sun J.
      Deep Residual Learning for Image Recognition.
      ,
      • Emami H.
      • Dong M.
      • Nejad-Davarani S.P.
      • Glide-Hurst C.K.
      Generating synthetic CTs from magnetic resonance images using generative adversarial networks.
      ], and DenseNet [
      • Huang G.
      • Liu Z.
      • Maaten V.D.
      • Weinberger K.Q.
      Densely Connected Convolutional Networks.
      ]. Fig. 4 presents some architectures of CED-based generators.
      Fig. 4 Representation of the generator architecture for U-Net
      [
      • Isola P.
      • Zhu J.-Y.
      • Zhou T.
      • Efros A.A.
      Image-to-Image Translation with Conditional Adversarial Networks.
      ]
      and adapted implementations of DenseNet
      [
      • Huang G.
      • Liu Z.
      • Maaten V.D.
      • Weinberger K.Q.
      Densely Connected Convolutional Networks.
      ]
      , SE-ResNet
      [
      • Hu J.
      • Shen L.
      • Albanie S.
      • Sun G.
      • Wu E.
      Squeeze-and-Excitation Networks.
      ]
      , and Embedded Net
      [
      • Xiang L.
      • Wang Q.
      • Nie D.
      • Zhang L.
      • Jin X.
      • Qiao Y.
      • et al.
      Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
      ]
      . The size of the boxes indicates the relative resolutions of the feature maps. The green boxes represent convolutional layers and orange boxes represent transposed convolutional layers. Yellow, red and blue boxes represent the SE-ResNet, DenseNet, and Embedded blocks. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
      The CED network consists of paired encoder and decoder networks. CEDs have been extensively used in the DL literature thanks to their excellent performance. In the encoding part, low-level feature maps are down-sampled to high-level feature maps. In the decoding part, the high-level feature maps are up-sampled back to low-level feature maps using transposed convolutional layers to construct the predicted image (sCT).
      The encoder network uses a set of 2D convolutional filters (no dilated convolutions) for detecting image features, followed by normalization (instance [
      • Ulyanov D.
      • Vedaldi A.
      • Lempitsky V.
      Instance Normalization: The Missing Ingredient for Fast Stylization.
      ] or batch normalization [
      • Ioffe S.
      • Szegedy C.
      Batch Normalization Accelerating Deep Network Training by Reducing Internal Covariate Shift.
      ]), a nonlinear activation function (ReLU [
      • Nair V.
      • Hinton G.
      Rectified linear units improve restricted boltzmann machines.
      ], LeakyReLU [
      • Maas A.L.
      • Hannun A.Y.
      • Ng A.Y.
      Rectifier nonlinearities improve neural network acoustic models.
      ], or PReLU), and max-pooling.
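For reference, the nonlinearities listed above differ only in how they treat negative inputs; a minimal pure-Python sketch (applied element-wise to every voxel of a feature map in practice):

```python
def relu(x: float) -> float:
    """ReLU: zero for negative inputs, identity otherwise."""
    return max(0.0, x)

def leaky_relu(x: float, slope: float = 0.01) -> float:
    """LeakyReLU: a small fixed negative slope avoids 'dead' units."""
    return x if x >= 0.0 else slope * x

def prelu(x: float, a: float) -> float:
    """PReLU: like LeakyReLU, but the negative slope `a` is learned."""
    return x if x >= 0.0 else a * x
```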
      The decoder path combines feature and spatial information through a sequence of symmetrical transposed convolutional layers (up-convolutions), up-sampling operators, concatenation layers (concatenations with high-resolution features), and convolutional layers with a ReLU activation function.
      The most well-known and popular CED variant for biomedical image applications is the U-shaped CNN (U-Net) architecture proposed by Ronneberger et al. [

      Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. ArXiv:150504597 [Cs] 2015.

      ]. The U-Net [

      Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. ArXiv:150504597 [Cs] 2015.

      ] has a CED structure with direct skip connections between the encoder and decoder. Han et al. were the first to publish a sCT study with a U-Net architecture [
      • Han X.
      MR-based synthetic CT generation using a deep convolutional neural network method.
      ] that is similar to Ronneberger’s model. This 2D U-Net model directly learns a mapping function to convert a 2D MR grayscale image to its corresponding 2D sCT image. The study by Han et al. [
      • Han X.
      MR-based synthetic CT generation using a deep convolutional neural network method.
      ] differs from the original U-net since the three fully connected layers were removed. Thus, the number of parameters is reduced by 90%, and the final model is easier to train. In Wang et al. [

      Wang Y, Liu C, Zhang X, Deng W. Synthetic CT Generation Based on T2 Weighted MRI of Nasopharyngeal Carcinoma (NPC) Using a Deep Convolutional Neural Network (DCNN). Front Oncol 2019;9. 10.3389/fonc.2019.01333.

      ], the U-net model used batch normalization [
      • Ioffe S.
      • Szegedy C.
      Batch Normalization Accelerating Deep Network Training by Reducing Internal Covariate Shift.
      ] and leaky ReLU, which was different from the classical U-net [

      Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. ArXiv:150504597 [Cs] 2015.

      ].
      The DECNN model proposed by Xiang et al. [
      • Xiang L.
      • Wang Q.
      • Nie D.
      • Zhang L.
      • Jin X.
      • Qiao Y.
      • et al.
      Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
      ] is derived by inserting multiple embedding blocks into the U-net architecture. This embedding strategy helps to backpropagate the gradients in the CNN and also provides easier and more effective training of the end-to-end mapping from MR to CT with faster convergence.
      The efficient CNN (eCNN) model [
      • Badrinarayanan V.
      • Kendall A.
      • Cipolla R.
      SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation.
      ] was built based on the encoder-decoder networks in the U-Net model [

      Ronneberger O, Fischer P, Brox T. U-Net: Convolutional Networks for Biomedical Image Segmentation. ArXiv:150504597 [Cs] 2015.

      ] where the convolutional layers were replaced with building blocks designed to extract image features from the input MRI.
      Some generative models use dilated convolutions, also called “atrous convolutions” (rather than conventional convolutions), which expand the receptive field without loss of resolution or coverage [

      Wolterink JM, Leiner T, Viergever MA, Išgum I. Dilated Convolutional Neural Networks for Cardiovascular MR Segmentation in Congenital Heart Disease. Reconstruction, Segmentation, and Analysis of Medical Images, RAMBO 2016, HVSMR 2016 Lecture Notes in Computer Science 2017;10129:95–102. 10.1007/978-3-319-52280-7_9.

      ]. Wolterink et al. [

      Wolterink JM, Leiner T, Viergever MA, Išgum I. Dilated Convolutional Neural Networks for Cardiovascular MR Segmentation in Congenital Heart Disease. Reconstruction, Segmentation, and Analysis of Medical Images, RAMBO 2016, HVSMR 2016 Lecture Notes in Computer Science 2017;10129:95–102. 10.1007/978-3-319-52280-7_9.

      ] used a dilated CNN capturing larger anatomical context to differentiate between tissues with similar intensities on MR.
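The receptive-field gain from dilation can be quantified: for a stack of stride-1 layers, each layer with kernel size k and dilation d adds (k − 1)·d to the receptive field. A short sketch (the layer configuration is illustrative, not taken from any reviewed study):

```python
def receptive_field(kernel_sizes, dilations) -> int:
    """Receptive field of stacked stride-1 convolutional layers.

    Each layer with kernel k and dilation d adds (k - 1) * d
    to the receptive field; a single pixel starts at RF = 1.
    """
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf

# Four 3x3 layers, conventional convolutions: RF grows linearly.
print(receptive_field([3, 3, 3, 3], [1, 1, 1, 1]))   # 9
# Same depth with exponentially increasing dilation: a much larger
# anatomical context at the same cost, with no pooling (no resolution loss).
print(receptive_field([3, 3, 3, 3], [1, 2, 4, 8]))   # 31
```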
      The ResNet architecture [
      • He K.
      • Zhang X.
      • Ren S.
      • Sun J.
      Deep Residual Learning for Image Recognition.
      ] has three convolutional layers (each containing a convolution operation, a batch normalization layer, and a ReLU activation function), followed by nine residual blocks (containing convolutional layers, batch normalization layers, and ReLU activation functions) with fully connected layers. HighRes-net [
      • Li W.
      • Wang G.
      • Fidon L.
      • Ourselin S.
      • Cardoso M.J.
      • Vercauteren T.
      On the Compactness, Efficiency, and Representation of 3D Convolutional Networks: Brain Parcellation as a Pretext Task.
      ] consists of a CED architecture with residual connections, normalization layers, and rectified linear unit (ReLU) activations [
      • Nair V.
      • Hinton G.
      Rectified linear units improve restricted boltzmann machines.
      ] using high-resolution ground truth (no pooling layers) as supervision with few trainable parameters [

      Andres EA, Fidon L, Vakalopoulou M, Lerousseau M, Carré A, Sun R, et al. Dosimetry-driven quality measure of brain pseudo Computed Tomography generated from deep learning for MRI-only radiotherapy treatment planning. Int J Radiat Oncol Biol Phys 2020:S0360301620311305. 10.1016/j.ijrobp.2020.05.006.

      ]. The atrous spatial pyramid pooling (ASPP) generator [
      • Olberg S.
      • Zhang H.
      • Kennedy W.R.
      • Chun J.
      • Rodriguez V.
      • Zoberi I.
      • et al.
      Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy.
      ] employs atrous (dilated) convolutions and is implemented in a similar U-Net architecture. The ASPP module reduces the total number of trainable parameters by almost a factor of four.
      The fully convolutional network (FCN) better preserves the neighborhood information in the generated sCT images [

      Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In: Carneiro G, Mateus D, Peter L, Bradley A, Tavares JMRS, Belagiannis V, et al., editors. Deep Learning and Data Labeling for Medical Applications, vol. 10008, Cham: Springer International Publishing; 2016, p. 170–8. 10.1007/978-3-319-46976-8_18.

      ]. Compared to a conventional CNN, pooling layers are not used in this image-to-image translation task [

      Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In: Carneiro G, Mateus D, Peter L, Bradley A, Tavares JMRS, Belagiannis V, et al., editors. Deep Learning and Data Labeling for Medical Applications, vol. 10008, Cham: Springer International Publishing; 2016, p. 170–8. 10.1007/978-3-319-46976-8_18.

      ]. FCNs can simplify and speed up network learning and inference, making the learning problem easier. By contrast, fully connected layers are computationally very expensive.
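The cost of fully connected layers (and the large parameter reduction reported when removing them, e.g. in Han et al.) is easy to see by counting parameters; the layer sizes below are illustrative, not taken from any reviewed study:

```python
def fc_params(n_in: int, n_out: int) -> int:
    """Dense layer: every input unit connects to every output unit."""
    return n_in * n_out + n_out  # weights + biases

def conv_params(k: int, c_in: int, c_out: int) -> int:
    """Conv layer: k x k kernels shared across all spatial positions."""
    return k * k * c_in * c_out + c_out

# A single dense layer on a modest 7x7x512 feature map dwarfs a 3x3 conv:
print(fc_params(7 * 7 * 512, 4096))   # 102,764,544 parameters
print(conv_params(3, 512, 512))       # 2,359,808 parameters
```

Because convolutional weights are shared across positions, dropping the dense layers removes the vast majority of a network's parameters while keeping its feature-extraction capacity.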
      The deep CED network [
      • Liu F.
      • Yadav P.
      • Baschnagel A.M.
      • McMillan A.B.
      MR-based treatment planning in radiation therapy using a deep learning approach.
      ] consists of a combined encoder network (the popular Visual Geometry Group [VGG] 16-layer net model) and a decoder network (reversed VGG16) with multiple symmetrical shortcut connections between layers.
      Twenty-nine state-of-the-art sCT image generation methods have adopted a generator-only network [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Mylona E.
      • Castelli J.
      • Lafond C.
      • et al.
      Comparison of Deep Learning-Based and Patch-Based Methods for Pseudo-CT Generation in MRI-Based Prostate Dose Planning.
      ,

      Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In: Carneiro G, Mateus D, Peter L, Bradley A, Tavares JMRS, Belagiannis V, et al., editors. Deep Learning and Data Labeling for Medical Applications, vol. 10008, Cham: Springer International Publishing; 2016, p. 170–8. 10.1007/978-3-319-46976-8_18.

      ,
      • Xiang L.
      • Wang Q.
      • Nie D.
      • Zhang L.
      • Jin X.
      • Qiao Y.
      • et al.
      Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
      ,
      • Liu F.
      • Yadav P.
      • Baschnagel A.M.
      • McMillan A.B.
      MR-based treatment planning in radiation therapy using a deep learning approach.
      ,

      Andres EA, Fidon L, Vakalopoulou M, Lerousseau M, Carré A, Sun R, et al. Dosimetry-driven quality measure of brain pseudo Computed Tomography generated from deep learning for MRI-only radiotherapy treatment planning. Int J Radiat Oncol Biol Phys 2020:S0360301620311305. 10.1016/j.ijrobp.2020.05.006.

      ,
      • Han X.
      MR-based synthetic CT generation using a deep convolutional neural network method.
      ,
      • Spadea M.F.
      • Pileggi G.
      • Zaffino P.
      • Salome P.
      • Catana C.
      • Izquierdo-Garcia D.
      • et al.
      Deep Convolution Neural Network (DCNN) Multiplane Approach to Synthetic CT Generation From MR images—Application in Brain Proton Therapy.
      ,

      Wang Y, Liu C, Zhang X, Deng W. Synthetic CT Generation Based on T2 Weighted MRI of Nasopharyngeal Carcinoma (NPC) Using a Deep Convolutional Neural Network (DCNN). Front Oncol 2019;9. 10.3389/fonc.2019.01333.

      ,
      • Arabi H.
      • Dowling J.A.
      • Burgos N.
      • Han X.
      • Greer P.B.
      • Koutsouvelis N.
      • et al.
      Comparative study of algorithms for synthetic CT generation from MRI : Consequences for MRI -guided radiation planning in the pelvic region.
      ,
      • Gupta D.
      • Kim M.
      • Vineberg K.A.
      • Balter J.M.
      Generation of Synthetic CT Images From MRI for Treatment Planning and Patient Positioning Using a 3-Channel U-Net Trained on Sagittal Images.
      ,
      • Dinkla A.M.
      • Florkow M.C.
      • Maspero M.
      • Savenije M.H.F.
      • Zijlstra F.
      • Doornaert P.A.H.
      • et al.
      Dosimetric evaluation of synthetic CT for head and neck radiotherapy generated by a patch-based three-dimensional convolutional neural network.
      ,
      • Qi M.
      • Li Y.
      • Wu A.
      • Jia Q.
      • Li B.
      • Sun W.
      • et al.
      Multi-sequence MR image-based synthetic CT generation using a generative adversarial network for head and neck MRI-only radiotherapy.
      ,
      • Chen S.
      • Qin A.
      • Zhou D.
      • Yan D.
      Technical Note: U-net-generated synthetic CT images for magnetic resonance imaging-only prostate intensity-modulated radiation therapy treatment planning.
      ,

      Florkow MC, Zijlstra F, M.d LGWK, Maspero M, Berg CAT van den, Stralen M van, et al. The impact of MRI-CT registration errors on deep learning-based synthetic CT generation. Medical Imaging 2019: Image Processing, vol. 10949, International Society for Optics and Photonics; 2019, p. 1094938. 10.1117/12.2512747.

      ,
      • Florkow M.C.
      • Zijlstra F.
      • Willemsen K.
      • Maspero M.
      • van den Berg C.A.T.
      • Kerkmeijer L.G.W.
      • et al.
      Deep learning–based MR-to-CT synthesis: The influence of varying gradient echo–based MR images as input channels.
      ,

      Stadelmann JV, Schulz H, Heide UA van der, Renisch S. Pseudo-CT image generation from mDixon MRI images using fully convolutional neural networks. Medical Imaging 2019: Biomedical Applications in Molecular, Structural, and Functional Imaging, vol. 10953, International Society for Optics and Photonics; 2019, p. 109530Z. 10.1117/12.2512741.

      ,
      • Neppl S.
      • Landry G.
      • Kurz C.
      • Hansen D.C.
      • Hoyle B.
      • Stöcklein S.
      • et al.
      Evaluation of proton and photon dose distributions recalculated on 2D and 3D Unet-generated pseudoCTs from T1-weighted MR head scans.
      ,
      • Olberg S.
      • Zhang H.
      • Kennedy W.R.
      • Chun J.
      • Rodriguez V.
      • Zoberi I.
      • et al.
      Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy.
      ,

      Li W, Li Y, Qin W, Liang X, Xu J, Xiong J, et al. Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy. Quant Imaging Med Surg 2020;10:1223–36. 10.21037/qims-19-885.

      ,
      • Dinkla A.M.
      • Wolterink J.M.
      • Maspero M.
      • Savenije M.H.F.
      • Verhoeff J.J.C.
      • Seravalli E.
      • et al.
      MR-Only Brain Radiation Therapy: Dosimetric Evaluation of Synthetic CTs Generated by a Dilated Convolutional Neural Network.
      ,
      • Thummerer A.
      • de Jong B.A.
      • Zaffino P.
      • Meijers A.
      • Marmitt G.G.
      • Seco J.
      • et al.
      Comparison of the suitability of CBCT- and MR-based synthetic CTs for daily adaptive proton therapy in head and neck patients.
      ,

      Massa HA, Johnson JM, McMillan AB. Comparison of deep learning synthesis of synthetic CTs using clinical MRI inputs. Phys Med Biol 2020;65:23NT03. 10.1088/1361-6560/abc5cb.

      ,

      Jeon W, An HJ, Kim J, Park JM, Kim H, Shin KH, et al. Preliminary Application of Synthetic Computed Tomography Image Generation from Magnetic Resonance Image Using Deep-Learning in Breast Cancer Patients. J Radiat Prot Res 2019;44:149–55. 10.14407/jrpr.2019.44.4.149.

      ,
      • Florkow M.C.
      • Guerreiro F.
      • Zijlstra F.
      • Seravalli E.
      • Janssens G.O.
      • Maduro J.H.
      • et al.
      Deep learning-enabled MRI-only photon and proton therapy treatment planning for paediatric abdominal tumours.
      ,
      • Bahrami A.
      • Karimian A.
      • Fatemizadeh E.
      • Arabi H.
      • Zaidi H.
      A new deep convolutional neural network design with efficient learning capability: Application to CT image synthesis from MRI.
      ,
      • Liu L.
      • Johansson A.
      • Cao Y.
      • Dow J.
      • Lawrence T.S.
      • Balter J.M.
      Abdominal synthetic CT generation from MR Dixon images using a U-net trained with ‘semi-synthetic’ CT data.
      ,
      • Liu Y.
      • Lei Y.
      • Wang Y.
      • Shafai-Erfani G.
      • Wang T.
      • Tian S.
      • et al.
      Evaluation of a deep learning-based pelvic synthetic CT generation technique for MRI-based prostate proton treatment planning.
      ,
      • Fu J.
      • Yang Y.
      • Singhrao K.
      • Ruan D.
      • Chu F.-I.
      • Low D.A.
      • et al.
      Deep learning approaches using 2D and 3D convolutional neural networks for generating male pelvic synthetic computed tomography from magnetic resonance imaging.
      ,
      • Palmér E.
      • Karlsson A.
      • Nordström F.
      • Petruson K.
      • Siversson C.
      • Ljungberg M.
      • et al.
      Synthetic computed tomography data allows for accurate absorbed dose calculations in a magnetic resonance imaging only workflow for head and neck radiotherapy.
      ]. The loss functions LG used in these generative models to evaluate the sCT against the real CT are:
      Using the L2 distance as a loss function tends to produce blurry results. A perceptual loss can instead capture discrepancies between the high-frequency components of the images.
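The blurring tendency of L2 can be seen on a toy ambiguous voxel: when the training data offer two equally plausible target values, L2 is minimized by their mean rather than by either realistic value (the HU numbers below are illustrative):

```python
def l2_cost(pred: float, targets) -> float:
    """Summed squared error of one prediction against plausible targets."""
    return sum((pred - t) ** 2 for t in targets)

targets = [-1000.0, 0.0]            # ambiguous voxel: air or soft tissue
candidates = [-1000.0, -500.0, 0.0]
best = min(candidates, key=lambda p: l2_cost(p, targets))
print(best)  # -500.0: L2 prefers the blurry in-between value
```

An L1 (absolute-error) loss penalizes such in-between predictions less strongly in favor of the median, which is one reason L1 is often preferred for sCT synthesis.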
      One limitation of CNN-based generative models is that they may produce blurry results due to the misalignment generally present between the MR and CT training pairs [

      Wolterink JM, Dinkla AM, Savenije MHF, Seevinck PR, Berg CAT van den, Isgum I. Deep MR to CT Synthesis using Unpaired Data. ArXiv:170801155 [Cs] 2017.

      ].

      Generative adversarial network (GAN)

      The following section summarizes GAN-based architectures to generate sCT from MRI. We introduce the GAN architecture and the three most popular GAN-based extensions: least squares-GAN, conditional-GAN, and cycle-GAN.
      • i)
        GAN
      The adversarial learning strategy was proposed by Goodfellow et al. [

      Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative Adversarial Networks. ArXiv:14062661 [Cs, Stat] 2014.

      ] to generate better sCT images than previous generator-only models. The original approach is to simultaneously train two separate neural networks (Fig. 5): the generator G (one of the generator-only models described in i) and Fig. 4) and the discriminator D. These two neural networks form a two-player min–max game in which G tries to produce realistic images to fool D, while D tries to distinguish between real and synthetic data [
      • Yi X.
      • Walia E.
      • Babyn P.
      Generative Adversarial Network in Medical Imaging: A Review.
      ,

      Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative Adversarial Networks. ArXiv:14062661 [Cs, Stat] 2014.

      ]. Compared to generator-only models, GAN introduces a data-driven regularizer, the adversarial loss, to ensure that the learned distribution approaches the ground truth.
      Fig. 5. Generative adversarial network (GAN) architecture. A GAN consists of two adversarial CNNs. The first CNN, called the generator, is trained to synthesize images that resemble real images (such as real CTs). The second CNN, called the discriminator, is trained to differentiate fake (synthetic) images from real images, which is treated as a binary classification problem. The loss function LD of the discriminator (called the adversarial loss) is generally the binary cross-entropy. G and D are trained alternately and share the same adversarial objective. The overall loss function LG combines the adversarial loss and a voxel-wise loss function (measuring the similarity between the real-CT and synthetic-CT voxels).
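The adversarial objective described in the figure caption can be written in a few lines, assuming the discriminator outputs a probability that its input is a real CT (a pedagogical sketch, not any study's implementation):

```python
import math

def bce(p_real: float, is_real: bool) -> float:
    """Binary cross-entropy for one sample, given D's probability
    that the sample is a real CT."""
    return -math.log(p_real) if is_real else -math.log(1.0 - p_real)

def discriminator_loss(d_on_real: float, d_on_sct: float) -> float:
    """L_D: reward D for labelling real CTs real and sCTs fake."""
    return bce(d_on_real, True) + bce(d_on_sct, False)

def generator_adv_term(d_on_sct: float) -> float:
    """Adversarial part of L_G: G improves as D is fooled by the sCT."""
    return bce(d_on_sct, True)

# An undecided discriminator (score 0.5 everywhere) yields a loss of
# log(2) per judgement for both players.
```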
      In the original version [

      Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative Adversarial Networks. ArXiv:14062661 [Cs, Stat] 2014.

      ], the discriminator and generator were implemented as multilayer perceptrons (MLPs); more recently, they are implemented as CNNs. The generator architecture is often the conventional U-Net. Another generator architecture proposed for a GAN is ResNet [
      • Emami H.
      • Dong M.
      • Nejad-Davarani S.P.
      • Glide-Hurst C.K.
      Generating synthetic CTs from magnetic resonance images using generative adversarial networks.
      ] which is easy to optimize and can gain accuracy from considerably increased depth. The discriminator of the GAN [

      Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative Adversarial Networks. ArXiv:14062661 [Cs, Stat] 2014.

      ] consists of six convolutional layers with different numbers of filters but the same kernel sizes and strides, followed by five fully connected layers. ReLU activation and batch normalization were used in the convolutional layers. A dropout layer was added to the fully connected layers, and a sigmoid activation function was used in the last one.
      The discriminator used in [
      • Isola P.
      • Zhu J.-Y.
      • Zhou T.
      • Efros A.A.
      Image-to-Image Translation with Conditional Adversarial Networks.
      ] is a convolutional “PatchGAN” classifier (Markovian discriminator) that models high-frequency image structure in local patches and only penalizes structure at the scale of image patches.
      Using adversarial loss LD, the classical GAN model can generate high-quality sCT images with less blurry results [

      Nie D, Trullo R, Lian J, Petitjean C, Ruan S, Wang Q, et al. Medical Image Synthesis with Context-Aware Generative Adversarial Networks. In: Descoteaux M, Maier-Hein L, Franz A, Jannin P, Collins DL, Duchesne S, editors. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, vol. 10435, Cham: Springer International Publishing; 2017, p. 417–25. 10.1007/978-3-319-66179-7_48.

      ,

      Wolterink JM, Dinkla AM, Savenije MHF, Seevinck PR, Berg CAT van den, Isgum I. Deep MR to CT Synthesis using Unpaired Data. ArXiv:170801155 [Cs] 2017.

      ] than generator-only models. The discriminator tries to maximize this adversarial loss, while the generator tries to minimize it.
      In this review, six studies used classical GAN-based architectures to generate sCT from MRI [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Mylona E.
      • Castelli J.
      • Lafond C.
      • et al.
      Comparison of Deep Learning-Based and Patch-Based Methods for Pseudo-CT Generation in MRI-Based Prostate Dose Planning.
      ,

      Nie D, Trullo R, Lian J, Petitjean C, Ruan S, Wang Q, et al. Medical Image Synthesis with Context-Aware Generative Adversarial Networks. In: Descoteaux M, Maier-Hein L, Franz A, Jannin P, Collins DL, Duchesne S, editors. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, vol. 10435, Cham: Springer International Publishing; 2017, p. 417–25. 10.1007/978-3-319-66179-7_48.

      ,
      • Emami H.
      • Dong M.
      • Nejad-Davarani S.P.
      • Glide-Hurst C.K.
      Generating synthetic CTs from magnetic resonance images using generative adversarial networks.
      ,
      • Liu Y.
      • Lei Y.
      • Wang Y.
      • Shafai-Erfani G.
      • Wang T.
      • Tian S.
      • et al.
      Evaluation of a deep learning-based pelvic synthetic CT generation technique for MRI-based prostate proton treatment planning.
      ,

      Largent A, Marage L, Gicquiau I, Nunes J-C, Reynaert N, Castelli J, et al. Head-and-Neck MRI-only radiotherapy treatment planning: From acquisition in treatment position to pseudo-CT generation. Cancer/Radiothérapie 2020:S1278321820300615. 10.1016/j.canrad.2020.01.008.

      ,
      • Liu X.
      • Emami H.
      • Nejad-Davarani S.P.
      • Morris E.
      • Schultz L.
      • Dong M.
      • et al.
      Performance of deep learning synthetic CTs for MR-only brain radiation therapy.
      ]. The overall loss functions used in these GANs, integrating the adversarial loss function LD and evaluating the sCT against the original CT, are:
      The adversarial loss function LD of the discriminator used in these GANs was generally the binary cross-entropy [

      Nie D, Trullo R, Lian J, Petitjean C, Ruan S, Wang Q, et al. Medical Image Synthesis with Context-Aware Generative Adversarial Networks. In: Descoteaux M, Maier-Hein L, Franz A, Jannin P, Collins DL, Duchesne S, editors. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, vol. 10435, Cham: Springer International Publishing; 2017, p. 417–25. 10.1007/978-3-319-66179-7_48.

      ].
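A typical overall generator objective of this kind combines a voxel-wise fidelity term with a weighted adversarial term; in the sketch below, the weight `lam` is a hypothetical placeholder, since each study tunes its own:

```python
import math

def l1(sct, ct) -> float:
    """Voxel-wise mean absolute error between sCT and real CT."""
    return sum(abs(s - c) for s, c in zip(sct, ct)) / len(sct)

def generator_loss(sct, ct, d_on_sct: float, lam: float = 0.01) -> float:
    """L_G = voxel-wise fidelity + lam * adversarial term (-log D(sCT)).

    `d_on_sct` is the discriminator's probability that the sCT is real;
    when D is fully fooled (score 1.0), only the fidelity term remains.
    """
    return l1(sct, ct) + lam * (-math.log(d_on_sct))
```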
      Perceptual regularization, used by Largent et al. [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Mylona E.
      • Castelli J.
      • Lafond C.
      • et al.
      Comparison of Deep Learning-Based and Patch-Based Methods for Pseudo-CT Generation in MRI-Based Prostate Dose Planning.
      ], helps to prevent image over-smoothing and the loss of structural details. Perceptual loss functions are based on high-level features extracted from a pre-trained VGG network (the 7th layer of VGG16 in [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Mylona E.
      • Castelli J.
      • Lafond C.
      • et al.
      Comparison of Deep Learning-Based and Patch-Based Methods for Pseudo-CT Generation in MRI-Based Prostate Dose Planning.
      ]).
      As shown by several studies [

      Nie D, Trullo R, Lian J, Petitjean C, Ruan S, Wang Q, et al. Medical Image Synthesis with Context-Aware Generative Adversarial Networks. In: Descoteaux M, Maier-Hein L, Franz A, Jannin P, Collins DL, Duchesne S, editors. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, vol. 10435, Cham: Springer International Publishing; 2017, p. 417–25. 10.1007/978-3-319-66179-7_48.

      ,
      • Emami H.
      • Dong M.
      • Nejad-Davarani S.P.
      • Glide-Hurst C.K.
      Generating synthetic CTs from magnetic resonance images using generative adversarial networks.
      ,
      • Ledig C.
      • Theis L.
      • Huszar F.
      • Caballero J.
      • Cunningham A.
      • Acosta A.
      • et al.
      Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network.
      ], (1) the adversarial network prevents the generated images from blurring and better preserves details, especially edge features; (2) the accuracy of the sCT within the bone region is increased; and (3) the discriminator detects patch features in both real and fake images, mitigating the misregistration problem caused by imperfect alignment between multi-parametric MRI and CT. Convergence in GANs depends heavily on hyperparameter tuning to avoid vanishing [
      • Bengio Y.
      • Simard P.
      • Frasconi P.
      Learning Long-Term Dependencies with Gradient Descent is Difficult.
      ] or exploding gradients, and they are prone to mode collapse. To tackle the training instability of GANs, a plethora of extensions and subclasses have been proposed.
      • ii)
        Least Squares-GAN (LS-GAN)
      Most GANs use binary cross-entropy as the discriminator loss function. However, this cross-entropy loss leads to a saturation problem in GAN learning (the well-known problem of vanishing gradients [
      • Bengio Y.
      • Simard P.
      • Frasconi P.
      Learning Long-Term Dependencies with Gradient Descent is Difficult.
      ]). The least-squares loss function strongly penalizes fake samples that lie far from the decision boundary and improves the stability of the learning process. Mao et al. [

      Mao X, Li Q, Xie H, Lau RYK, Wang Z, Smolley SP. Least Squares Generative Adversarial Networks. IEEE International Conference on Computer Vision 2017:9.

      ] adopted the least-squares loss function for the discriminator and showed that minimizing the objective function of LS-GAN minimizes the Pearson χ2 divergence [
      • Brou Boni K.N.D.
      • Klein J.
      • Vanquin L.
      • Wagner A.
      • Lacornerie T.
      • Pasquier D.
      • et al.
      MR to CT synthesis with multicenter data in the pelvic era using a conditional generative adversarial network.
      ]. Emami et al. [
      • Emami H.
      • Dong M.
      • Nejad-Davarani S.P.
      • Glide-Hurst C.K.
      Generating synthetic CTs from magnetic resonance images using generative adversarial networks.
      ] replaced the negative log-likelihood objective with a least-squares loss function (L2 loss), which was more stable during training and generated better sCT quality.
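A minimal sketch of the least-squares objectives (real label 1, fake label 0) shows why a confidently rejected fake still receives a useful gradient; this is an illustration of the general LS-GAN formulation, not any specific study's code:

```python
def lsgan_d_loss(d_on_real: float, d_on_sct: float) -> float:
    """Discriminator: push real scores toward 1 and fake scores toward 0."""
    return 0.5 * ((d_on_real - 1.0) ** 2 + d_on_sct ** 2)

def lsgan_g_loss(d_on_sct: float) -> float:
    """Generator: the penalty grows quadratically with the distance of the
    fake's score from the 'real' label, so the gradient (score - 1) stays
    nonzero even for fakes D rejects with high confidence."""
    return 0.5 * (d_on_sct - 1.0) ** 2
```

By contrast, the saturating cross-entropy generator objective yields near-zero gradients for fakes the discriminator confidently rejects, which is the vanishing-gradient problem noted above.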
      • iii)
        Conditional-GAN (cGAN)
      Since the original GAN allows no explicit control over the generated data, Goodfellow et al. [

      Goodfellow IJ, Pouget-Abadie J, Mirza M, Xu B, Warde-Farley D, Ozair S, et al. Generative Adversarial Networks. ArXiv:14062661 [Cs, Stat] 2014.

      ] proposed the conditional GAN (cGAN) to incorporate additional information, such as class labels, into the synthesis process. The cGAN is an extension of the GAN model in which both the generator and the discriminator are conditioned on additional information; here, the sCT image output is conditioned on the MR image input.
      Different generator architectures in a cGAN have been proposed, including SE-ResNet [
      • He K.
      • Zhang X.
      • Ren S.
      • Sun J.
      Deep Residual Learning for Image Recognition.
      ,
      • Emami H.
      • Dong M.
      • Nejad-Davarani S.P.
      • Glide-Hurst C.K.
      Generating synthetic CTs from magnetic resonance images using generative adversarial networks.
      ], DenseNet [
      • Huang G.
      • Liu Z.
      • van der Maaten L.
      • Weinberger K.Q.
      Densely Connected Convolutional Networks.
      ], U-Net [
      • Olberg S.
      • Zhang H.
      • Kennedy W.R.
      • Chun J.
      • Rodriguez V.
      • Zoberi I.
      • et al.
      Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy.
      ,
      • Kazemifar S.
      • McGuire S.
      • Timmerman R.
      • Wardak Z.
      • Nguyen D.
      • Park Y.
      • et al.
      MRI-only brain radiotherapy: Assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach.
      ,
      • Kazemifar S.
      • Barragán Montero A.M.
      • Souris K.
      • Rivas S.T.
      • Timmerman R.
      • Park Y.K.
      • et al.
      Dosimetric evaluation of synthetic CT generated with GANs for MRI-only proton therapy treatment planning of brain tumors: Dosimetric evaluation of synthetic CT generated with GANs for MRI-only proton therapy treatment planning of brain tumors.
      ], Embedded Net [
      • Xiang L.
      • Wang Q.
      • Nie D.
      • Zhang L.
      • Jin X.
      • Qiao Y.
      • et al.
      Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
      ], and the atrous spatial pyramid pooling (ASPP) method [
      • Olberg S.
      • Zhang H.
      • Kennedy W.R.
      • Chun J.
      • Rodriguez V.
      • Zoberi I.
      • et al.
      Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy.
      ]. Fetty et al. [
      • Fetty L.
      • Löfstedt T.
      • Heilemann G.
      • Furtado H.
      • Nesvacil N.
      • Nyholm T.
      • et al.
      Investigating conditional GAN performance with different generator architectures, an ensemble model, and different MR scanners for MR-sCT conversion.
      ] evaluated four different generator architectures (SE-ResNet, DenseNet, U-Net, and Embedded Net) in a cGAN to generate sCT from T2-weighted MRI. Olberg et al. [
      • Olberg S.
      • Zhang H.
      • Kennedy W.R.
      • Chun J.
      • Rodriguez V.
      • Zoberi I.
      • et al.
      Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy.
      ] explored two generators: the conventional U-Net architecture implemented in the Pix2Pix framework [
      • Isola P.
      • Zhu J.-Y.
      • Zhou T.
      • Efros A.A.
      Image-to-Image Translation with Conditional Adversarial Networks.
      ] and the ASPP method [
      • Chen L.-C.
      • Zhu Y.
      • Papandreou G.
      • Schroff F.
      • Adam H.
      Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation.
      ,
      • Chen L.C.
      • Papandreou G.
      • Schroff F.
      • Adam H.
      Rethinking atrous convolution for semantic image segmentation.
      ]. The discriminator of the GAN framework was similar in both implementations.
Twenty studies used a cGAN architecture to generate sCT from MRI [Cusumano et al.; Fu et al.; Qi et al.; Olberg et al.; Li et al.; Kazemifar et al. (photon and proton brain studies); Brou Boni et al.; Fetty et al.; Koike et al.; Maspero et al. (pelvis and paediatric brain studies); Tie et al.; Hemsley et al.; Tang et al.; Bourbonne et al.; Bird et al.; Peng et al.; Klages et al.]. The overall generator loss functions LG, integrating the adversarial loss function LD and comparing the sCT with the real CT, used in these cGANs were as follows:
      • adversarial loss function (binary cross-entropy) [Kazemifar et al.; Klages et al.],
      • L1-norm (MAE) [Koike et al.],
      • least squares loss function (L2 loss) [Brou Boni et al.; Klages et al.],
      • mutual information (MI) [Kazemifar et al. (photon and proton brain studies)],
      • focal regression loss [Weber et al.] used in [Bird et al.],
      • the combination of adversarial (binary cross-entropy) and L2-norm losses [Olberg et al.],
      • the combination of L1-norm and PatchGAN loss (as proposed by Isola et al.) used in [Qi et al.; Fetty et al.; Maspero et al.],
      • the combination of adversarial (binary cross-entropy) loss and a term derived from the log-likelihood of the Laplace distribution [Hemsley et al.],
      • the combination of Lp-norm, adversarial and gradient losses [Fu et al.],
      • the combination of multiscale L1-norm, L1-norm and PatchGAN loss [Isola et al.] used in [Brou Boni et al.].
The L2-based loss function of the generator can cause image blurring. To alleviate blurriness and improve prediction accuracy, the L1-norm [Wang et al.] makes the learning more robust to outliers in the training data, such as noise or other image artifacts, or imperfect matching between the MR and CT images. The Markovian discriminator loss, or PatchGAN loss [Isola et al.], which can be understood as a form of texture/style loss, effectively models the image as a Markov random field, assuming independence between pixels separated by more than a patch diameter.
Pix2Pix, proposed by Isola et al., is a successful cGAN variant for high-resolution image-to-image translation. The Pix2Pix model generally uses a U-Net generator and a PatchGAN discriminator. As investigated by Isola et al., a loss function based on L1 alone leads to reasonable but blurred results, while the cGAN loss alone leads to sharp results but introduces image artifacts. The authors showed that training in an adversarial setting together with an L1-norm generated sharp images with few artifacts (tissue-classification errors, especially for bone and air differentiation).
In Hemsley et al., the L1 term of the cGAN loss function [Isola et al.] is replaced by a term derived from the log-likelihood of the Laplace distribution to capture data-dependent uncertainty.
To overcome MR/CT registration issues, Kazemifar et al. used a generator loss function based on the MI in a cGAN. The MI loss allows the cGAN to use unregistered data to generate sCT and seems to accurately distinguish between air and bone regions.
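A simple histogram-based estimate of mutual information between two images can be sketched as follows. This is an illustration of the similarity measure, not the (differentiable) formulation used in the cited studies; the bin count is an assumed parameter.

```python
import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram estimate of the mutual information between two images:
    MI = sum p(x,y) * log(p(x,y) / (p(x) * p(y))). MI is high when one
    image's intensities predict the other's, without requiring the two
    modalities to share intensity values, which is why it tolerates
    unregistered MR/CT pairs better than voxel-wise norms."""
    joint, _, _ = np.histogram2d(x.ravel(), y.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal over x bins
    py = pxy.sum(axis=0, keepdims=True)   # marginal over y bins
    nz = pxy > 0                          # avoid log(0) terms
    return np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz]))
```

Used as a generator loss, the MI (or its negative) is maximized between the input MRI and the sCT rather than minimized against a coregistered CT.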
Instead of the usual cross-entropy LD loss in cGANs, Mao et al. recommend the quadratic loss of the least squares GAN. Olberg et al. evaluated a Pix2Pix framework with two different generators: the conventional U-Net, and a proposed generator composed of stacked encoders and decoders separated by dilated convolutions applied at increasing rates in parallel to encode large-scale features. The overall loss function combined an adversarial (sigmoid cross-entropy) loss and an MAE loss.
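The least-squares formulation replaces the cross-entropy with quadratic penalties on the discriminator scores. A schematic sketch, using the common 0/1 target-label convention of LSGAN:

```python
import numpy as np

def lsgan_d_loss(d_real, d_fake):
    """LSGAN discriminator loss: push scores on real CTs toward 1 and
    scores on synthetic CTs toward 0 with quadratic penalties. Unlike
    cross-entropy, this also penalizes samples that are classified
    correctly but lie far from the decision boundary, which stabilizes
    the gradients fed back to the generator."""
    return 0.5 * np.mean((d_real - 1.0) ** 2) + 0.5 * np.mean(d_fake ** 2)

def lsgan_g_loss(d_fake):
    """LSGAN generator term: push discriminator scores on sCT toward 1."""
    return 0.5 * np.mean((d_fake - 1.0) ** 2)
```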
Twelve studies used a Pix2Pix architecture [Cusumano et al.; Qi et al.; Olberg et al.; Brou Boni et al.; Fetty et al.; Koike et al.; Maspero et al.; Tie et al.; Tang et al.; Bourbonne et al.; Klages et al.; Sharma et al.]. Most of these Pix2Pix frameworks used only one MRI sequence as input and generated one sCT as output (single-input single-output, SISO). A variant of the Pix2Pix architecture proposed by Sharma et al. is multi-input multi-output (MIMO), combining information from all available MRI sequences and synthesizing the missing ones.
One of the main advantages of cGANs is that the networks learn reasonable image-to-image translations even if the training dataset is small. However, cGANs require coregistered MR-CT image pairs for training, except when MI is used as the loss function [Kazemifar et al.].
iv) Cycle-GAN
For image-to-image translation between two modalities, the principle of the cycle-GAN is to extract characteristic features of both modalities and discover the underlying relationship between them [Zhu et al.]. The cycle-GAN involves two GANs: one to generate sCT from MRI, and a second to generate synthetic MRI (sMRI) from the sCT (the output of the first GAN). These dual GANs learn simultaneously, and a cyclic loss function minimizes the discrepancy between the original images and their reconstructions obtained from the chained generators.
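The cyclic term can be sketched as follows. This is a minimal illustration: `g_mr2ct` and `g_ct2mr` stand for the two generators, and the weighting lam = 10 is an assumed value, not one taken from the reviewed studies.

```python
import numpy as np

def cycle_consistency_loss(mri, ct, g_mr2ct, g_ct2mr, lam=10.0):
    """Cycle-consistency term of a cycle-GAN: chaining the two
    generators should return the input image in each modality.
    Both directions are penalized with an L1-norm, which is what
    lets training proceed without paired, coregistered MR/CT data."""
    mri_cycle = g_ct2mr(g_mr2ct(mri))  # MRI -> sCT -> sMRI
    ct_cycle = g_mr2ct(g_ct2mr(ct))    # CT -> sMRI -> sCT
    return lam * (np.mean(np.abs(mri - mri_cycle)) +
                  np.mean(np.abs(ct - ct_cycle)))
```

In a full cycle-GAN this term is added to the two adversarial losses, one per discriminator.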
Cycle-GAN-based frameworks do not require paired MRI/CT images [Wolterink et al.; Yang et al.]. Wolterink et al. found that training on unpaired images could, in some cases, outperform a GAN model trained on paired images.
Eleven studies used a cycle-GAN architecture to generate sCT from MRI [Lei et al.; Fu et al.; Li et al.; Liu et al. (pelvis); Wolterink et al.; Peng et al.; Klages et al.; Yang et al.; Liu et al. (liver photon); Liu et al. (liver proton); Shafai-Erfani et al.]. The overall generator loss functions LG, integrating the adversarial loss function LD and comparing the generated sCT with the real CT, used in these cycle-GANs were:
      • the combination of adversarial loss (cross-entropy) and L1-norm [Fu et al.; Klages et al.],
      • the combination of the adversarial loss based on cross-entropy, the cycle-consistency loss based on the L1-norm, and the structural-consistency loss based on L1-MIND [Yang et al.] (the modality-independent neighborhood descriptor, MIND, introduced in [Heinrich et al.]),
      • the combination of L2-norm, adversarial loss (binary cross-entropy), gradient difference loss and cycle-consistency loss (based on the L1-norm) [Wolterink et al.],
      • the combination of Lp-norm (mean P distance, MPD), adversarial loss and gradient loss [Lei et al.; Fu et al.; Liu et al. (pelvis); Liu et al. (liver proton)].
The loss functions LD of the discriminator used in these cycle-GANs are:
      • L2-norm (least squares loss) [Emami et al.] as proposed by Mao et al.,
      • MAD (L1-norm) [Lei et al.; Liu et al. (pelvis); Klages et al.; Liu et al. (liver proton)],
      • Lp-norm (MPD) [Shafai-Erfani et al.].
      Since L2-based loss functions tend to generate blurry images and L1-based loss functions may introduce tissue-classification errors, some authors [
      • Lei Y.
      • Harms J.
      • Wang T.
      • Liu Y.
      • Shu H.-K.
      • Jani A.B.
      • et al.
      MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks.
      ,

      Fu J, Singhrao K, Cao M, Yu V, Santhanam AP, Yang Y, et al. Generation of abdominal synthetic CTs from 0.35T MR images using generative adversarial networks for MR-only liver radiotherapy. Biomed Phys Eng Express 2020;6:015033. 10.1088/2057-1976/ab6e1f.

      ,
      • Liu Y.
      • Lei Y.
      • Wang Y.
      • Shafai-Erfani G.
      • Wang T.
      • Tian S.
      • et al.
      Evaluation of a deep learning-based pelvic synthetic CT generation technique for MRI-based prostate proton treatment planning.
      ,
      • Liu Y.
      • Lei Y.
      • Wang Y.
      • Wang T.
      • Ren L.
      • Lin L.
      • et al.
      MRI-based treatment planning for proton radiotherapy: dosimetric validation of a deep learning-based liver synthetic CT generation method.
      ,

      Shafai-Erfani G, Lei Y, Liu Y, Wang Y, Wang T, Zhong J, et al. MRI-Based Proton Treatment Planning for Base of Skull Tumors. Int J Particle Ther 2019;6:12–25. 10.14338/IJPT-19-00062.1.

      ] used an lp-norm (p = 1.5) distance, the MPD (Mean P distance). Using the MPD-based loss term, the authors also integrated an image gradient difference (GD) loss term (proposed in [

      Nie D, Trullo R, Lian J, Petitjean C, Ruan S, Wang Q, et al. Medical Image Synthesis with Context-Aware Generative Adversarial Networks. In: Descoteaux M, Maier-Hein L, Franz A, Jannin P, Collins DL, Duchesne S, editors. Medical Image Computing and Computer Assisted Intervention − MICCAI 2017, vol. 10435, Cham: Springer International Publishing; 2017, p. 417–25. 10.1007/978-3-319-66179-7_48.

      ]) into the loss function [
      • Lei Y.
      • Harms J.
      • Wang T.
      • Liu Y.
      • Shu H.-K.
      • Jani A.B.
      • et al.
      MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks.
      ,

      Fu J, Singhrao K, Cao M, Yu V, Santhanam AP, Yang Y, et al. Generation of abdominal synthetic CTs from 0.35T MR images using generative adversarial networks for MR-only liver radiotherapy. Biomed Phys Eng Express 2020;6:015033. 10.1088/2057-1976/ab6e1f.

      ,
      • Liu Y.
      • Lei Y.
      • Wang Y.
      • Shafai-Erfani G.
      • Wang T.
      • Tian S.
      • et al.
      Evaluation of a deep learning-based pelvic synthetic CT generation technique for MRI-based prostate proton treatment planning.
      ,
      • Liu Y.
      • Lei Y.
      • Wang Y.
      • Wang T.
      • Ren L.
      • Lin L.
      • et al.
      MRI-based treatment planning for proton radiotherapy: dosimetric validation of a deep learning-based liver synthetic CT generation method.
      ,

      Shafai-Erfani G, Lei Y, Liu Y, Wang Y, Wang T, Zhong J, et al. MRI-Based Proton Treatment Planning for Base of Skull Tumors. Int J Particle Ther 2019;6:12–25. 10.14338/IJPT-19-00062.1.

      ]) into the loss function, to retain sharpness in the synthetic images by preserving zones with strong gradients, such as edges. By contrast, cycle-GAN-based methods commonly use the MSE as distance loss function, which often leads to blurring and over-smoothing.
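As an illustration, the MPD and GD loss terms described above can be sketched in NumPy. This is a non-differentiable sketch for clarity only; in a real training pipeline these terms are implemented in the DL framework so that gradients can flow through them, and the function names here are illustrative:

```python
import numpy as np

def mean_p_distance(sct, ct, p=1.5):
    """MPD: mean l_p distance (here p = 1.5) between sCT and CT intensities."""
    return np.mean(np.abs(sct - ct) ** p)

def gradient_difference_loss(sct, ct):
    """Image gradient difference (GD) loss: penalizes mismatched gradient
    magnitudes between synthetic and reference CT so that the generator
    preserves sharp boundaries (e.g. bone/air interfaces)."""
    loss = 0.0
    for axis in range(sct.ndim):
        # finite-difference gradient magnitudes along each image axis
        g_sct = np.abs(np.diff(sct, axis=axis))
        g_ct = np.abs(np.diff(ct, axis=axis))
        loss += np.mean((g_sct - g_ct) ** 2)
    return loss
```

Both terms are zero for a perfect synthesis and grow with intensity or edge mismatch, which is why combining them favors images that are accurate in intensity while staying sharp.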

      Data for sCT generation from MRI

      MRI/CT image preprocessing and post-processing

      In eighteen studies an MRI bias correction [
      • Largent A.
      • Barateau A.
      • Nunes J.-C.
      • Mylona E.
      • Castelli J.
      • Lafond C.
      • et al.
      Comparison of Deep Learning-Based and Patch-Based Methods for Pseudo-CT Generation in MRI-Based Prostate Dose Planning.
      ,
      • Xiang L.
      • Wang Q.
      • Nie D.
      • Zhang L.
      • Jin X.
      • Qiao Y.
      • et al.
      Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
      ,
      • Lei Y.
      • Harms J.
      • Wang T.
      • Liu Y.
      • Shu H.-K.
      • Jani A.B.
      • et al.
      MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks.
      ,

      Fu J, Singhrao K, Cao M, Yu V, Santhanam AP, Yang Y, et al. Generation of abdominal synthetic CTs from 0.35T MR images using generative adversarial networks for MR-only liver radiotherapy. Biomed Phys Eng Express 2020;6:015033. 10.1088/2057-1976/ab6e1f.

      ,

      Andres EA, Fidon L, Vakalopoulou M, Lerousseau M, Carré A, Sun R, et al. Dosimetry-driven quality measure of brain pseudo Computed Tomography generated from deep learning for MRI-only radiotherapy treatment planning. Int J Radiat Oncol Biol Phys 2020:S0360301620311305. 10.1016/j.ijrobp.2020.05.006.

      ,
      • Han X.
      MR-based synthetic CT generation using a deep convolutional neural network method.
      ,
      • Arabi H.
      • Dowling J.A.
      • Burgos N.
      • Han X.
      • Greer P.B.
      • Koutsouvelis N.
      • et al.
      Comparative study of algorithms for synthetic CT generation from MRI: Consequences for MRI-guided radiation planning in the pelvic region.
      ,
      • Dinkla A.M.
      • Florkow M.C.
      • Maspero M.
      • Savenije M.H.F.
      • Zijlstra F.
      • Doornaert P.A.H.
      • et al.
      Dosimetric evaluation of synthetic CT for head and neck radiotherapy generated by a patch-based three-dimensional convolutional neural network.
      ,
      • Fu J.
      • Yang Y.
      • Singhrao K.
      • Ruan D.
      • Chu F.-I.
      • Low D.A.
      • et al.
      Deep learning approaches using 2D and 3D convolutional neural networks for generating male pelvic synthetic computed tomography from magnetic resonance imaging.
      ,

      Largent A, Marage L, Gicquiau I, Nunes J-C, Reynaert N, Castelli J, et al. Head-and-Neck MRI-only radiotherapy treatment planning: From acquisition in treatment position to pseudo-CT generation. Cancer/Radiothérapie 2020:S1278321820300615. 10.1016/j.canrad.2020.01.008.

      ,
      • Fetty L.
      • Löfstedt T.
      • Heilemann G.
      • Furtado H.
      • Nesvacil N.
      • Nyholm T.
      • et al.
      Investigating conditional GAN performance with different generator architectures, an ensemble model, and different MR scanners for MR-sCT conversion.
      ,
      • Koike Y.
      • Akino Y.
      • Sumida I.
      • Shiomi H.
      • Mizuno H.
      • Yagi M.
      • et al.
      Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy.
      ,
      • Tie X.
      • Lam S.
      • Zhang Y.
      • Lee K.
      • Au K.
      • Cai J.
      Pseudo-CT generation from multi-parametric MRI using a novel multi-channel multi-path conditional generative adversarial network for nasopharyngeal carcinoma patients.
      ,

      Klages P, Bensilmane I, Riyahi S, Jiang J, Hunt M, Deasy JO, et al. Comparison of Patch-Based Conditional Generative Adversarial Neural Net Models with Emphasis on Model Robustness for Use in Head and Neck Cases for MR-Only Planning. 2020. arXiv:1902.00536.

      ,

      Yang H, Sun J, Carass A, Zhao C, Lee J, Xu Z, et al. Unpaired Brain MR-to-CT Synthesis Using a Structure-Constrained CycleGAN. In: Stoyanov D, Taylor Z, Carneiro G, Syeda-Mahmood T, Martel A, Maier-Hein L, et al., editors. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Cham: Springer International Publishing; 2018, p. 174–82. 10.1007/978-3-030-00889-5_20.

      ,
      • Liu Y.
      • Lei Y.
      • Wang T.
      • Kayode O.
      • Tian S.
      • Liu T.
      • et al.
      MRI-based treatment planning for liver stereotactic body radiotherapy: validation of a deep learning-based synthetic CT generation method.
      ,
      • Liu Y.
      • Lei Y.
      • Wang Y.
      • Wang T.
      • Ren L.
      • Lin L.
      • et al.
      MRI-based treatment planning for proton radiotherapy: dosimetric validation of a deep learning-based liver synthetic CT generation method.
      ,

      Shafai-Erfani G, Lei Y, Liu Y, Wang Y, Wang T, Zhong J, et al. MRI-Based Proton Treatment Planning for Base of Skull Tumors. Int J Particle Ther 2019;6:12–25. 10.14338/IJPT-19-00062.1.

      ] was reported. In [
      • Xiang L.
      • Wang Q.
      • Nie D.
      • Zhang L.
      • Jin X.
      • Qiao Y.
      • et al.
      Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
      ,
      • Lei Y.
      • Harms J.
      • Wang T.
      • Liu Y.
      • Shu H.-K.
      • Jani A.B.
      • et al.
      MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks.
      ,
      • Han X.
      MR-based synthetic CT generation using a deep convolutional neural network method.
      ,
      • Arabi H.
      • Dowling J.A.
      • Burgos N.
      • Han X.
      • Greer P.B.
      • Koutsouvelis N.
      • et al.
      Comparative study of algorithms for synthetic CT generation from MRI: Consequences for MRI-guided radiation planning in the pelvic region.
      ], intensity inhomogeneity (or non-uniformity) correction was performed on all MR images using the N3 bias field correction algorithm [
      • Tustison N.J.
      • Avants B.B.
      • Cook P.A.
      • Zheng Y.
      • Egan A.
      • Yushkevich P.A.
      • et al.
      N4ITK: improved N3 bias correction.
      ,
      • Sled J.G.
      • Zijdenbos A.P.
      • Evans A.C.
      ] to correct the bias field before training or synthesis. In [

      Fu J, Singhrao K, Cao M, Yu V, Santhanam AP, Yang Y, et al. Generation of abdominal synthetic CTs from 0.35T MR images using generative adversarial networks for MR-only liver radiotherapy. Biomed Phys Eng Express 2020;6:015033. 10.1088/2057-1976/ab6e1f.

      ,

      Andres EA, Fidon L, Vakalopoulou M, Lerousseau M, Carré A, Sun R, et al. Dosimetry-driven quality measure of brain pseudo Computed Tomography generated from deep learning for MRI-only radiotherapy treatment planning. Int J Radiat Oncol Biol Phys 2020:S0360301620311305. 10.1016/j.ijrobp.2020.05.006.

      ,
      • Fu J.
      • Yang Y.
      • Singhrao K.
      • Ruan D.
      • Chu F.-I.
      • Low D.A.
      • et al.
      Deep learning approaches using 2D and 3D convolutional neural networks for generating male pelvic synthetic computed tomography from magnetic resonance imaging.
      ,

      Largent A, Marage L, Gicquiau I, Nunes J-C, Reynaert N, Castelli J, et al. Head-and-Neck MRI-only radiotherapy treatment planning: From acquisition in treatment position to pseudo-CT generation. Cancer/Radiothérapie 2020:S1278321820300615. 10.1016/j.canrad.2020.01.008.

      ,
      • Fetty L.
      • Löfstedt T.
      • Heilemann G.
      • Furtado H.
      • Nesvacil N.
      • Nyholm T.
      • et al.
      Investigating conditional GAN performance with different generator architectures, an ensemble model, and different MR scanners for MR-sCT conversion.
      ,
      • Koike Y.
      • Akino Y.
      • Sumida I.
      • Shiomi H.
      • Mizuno H.
      • Yagi M.
      • et al.
      Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy.
      ,
      • Tie X.
      • Lam S.
      • Zhang Y.
      • Lee K.
      • Au K.
      • Cai J.
      Pseudo-CT generation from multi-parametric MRI using a novel multi-channel multi-path conditional generative adversarial network for nasopharyngeal carcinoma patients.
      ,

      Yang H, Sun J, Carass A, Zhao C, Lee J, Xu Z, et al. Unpaired Brain MR-to-CT Synthesis Using a Structure-Constrained CycleGAN. In: Stoyanov D, Taylor Z, Carneiro G, Syeda-Mahmood T, Martel A, Maier-Hein L, et al., editors. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Cham: Springer International Publishing; 2018, p. 174–82. 10.1007/978-3-030-00889-5_20.

      ,
      • Liu Y.
      • Lei Y.
      • Wang T.
      • Kayode O.
      • Tian S.
      • Liu T.
      • et al.
      MRI-based treatment planning for liver stereotactic body radiotherapy: validation of a deep learning-based synthetic CT generation method.
      ,
      • Liu Y.
      • Lei Y.
      • Wang Y.
      • Wang T.
      • Ren L.
      • Lin L.
      • et al.
      MRI-based treatment planning for proton radiotherapy: dosimetric validation of a deep learning-based liver synthetic CT generation method.
      ,

      Shafai-Erfani G, Lei Y, Liu Y, Wang Y, Wang T, Zhong J, et al. MRI-Based Proton Treatment Planning for Base of Skull Tumors. Int J Particle Ther 2019;6:12–25. 10.14338/IJPT-19-00062.1.

      ], the authors reported that the intensity inhomogeneity of the MRI was corrected using the N4 bias field correction algorithm.
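The principle behind these bias corrections can be illustrated with a crude homomorphic sketch: the bias field is modeled as a slowly varying multiplicative factor and estimated here by smoothing the log-intensity image. This is only a toy stand-in for N3/N4, which iteratively fit a smooth B-spline field (N4ITK is available, for instance, through ITK/SimpleITK); the smoothing radius below is an arbitrary illustrative choice:

```python
import numpy as np

def smooth(img, radius=8):
    """Separable box blur, a crude stand-in for the smooth B-spline
    bias model fitted by N3/N4."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    out = img.astype(float)
    for axis in range(img.ndim):
        out = np.apply_along_axis(
            lambda v: np.convolve(v, kernel, mode="same"), axis, out)
    return out

def correct_bias(mr, eps=1e-6):
    """Homomorphic bias correction sketch: the bias is assumed to be a
    slowly varying multiplicative field, so it becomes additive in the
    log domain, where it is estimated by smoothing and subtracted."""
    log_mr = np.log(mr + eps)
    bias = smooth(log_mr)
    # re-center so the global mean log-intensity is preserved
    return np.exp(log_mr - bias + bias.mean())
```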
      A 2D or 3D MRI geometry correction provided by the vendor was sometimes reported [
      • Dinkla A.M.
      • Florkow M.C.
      • Maspero M.
      • Savenije M.H.F.
      • Zijlstra F.
      • Doornaert P.A.H.
      • et al.
      Dosimetric evaluation of synthetic CT for head and neck radiotherapy generated by a patch-based three-dimensional convolutional neural network.
      ,

      Li W, Li Y, Qin W, Liang X, Xu J, Xiong J, et al. Magnetic resonance image (MRI) synthesis from brain computed tomography (CT) images based on deep learning methods for magnetic resonance (MR)-guided radiotherapy. Quant Imaging Med Surg 2020;10:1223–36. 10.21037/qims-19-885.

      ,
      • Dinkla A.M.
      • Wolterink J.M.
      • Maspero M.
      • Savenije M.H.F.
      • Verhoeff J.J.C.
      • Seravalli E.
      • et al.
      MR-Only Brain Radiation Therapy: Dosimetric Evaluation of Synthetic CTs Generated by a Dilated Convolutional Neural Network.
      ,

      Yang H, Sun J, Carass A, Zhao C, Lee J, Xu Z, et al. Unpaired Brain MR-to-CT Synthesis Using a Structure-Constrained CycleGAN. In: Stoyanov D, Taylor Z, Carneiro G, Syeda-Mahmood T, Martel A, Maier-Hein L, et al., editors. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Cham: Springer International Publishing; 2018, p. 174–82. 10.1007/978-3-030-00889-5_20.

      ]. It is likely that most MR images underwent geometry correction, even when this was not explicitly reported.
      In [
      • Xiang L.
      • Wang Q.
      • Nie D.
      • Zhang L.
      • Jin X.
      • Qiao Y.
      • et al.
      Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
      ,

      Fu J, Singhrao K, Cao M, Yu V, Santhanam AP, Yang Y, et al. Generation of abdominal synthetic CTs from 0.35T MR images using generative adversarial networks for MR-only liver radiotherapy. Biomed Phys Eng Express 2020;6:015033. 10.1088/2057-1976/ab6e1f.

      ,
      • Fu J.
      • Yang Y.
      • Singhrao K.
      • Ruan D.
      • Chu F.-I.
      • Low D.A.
      • et al.
      Deep learning approaches using 2D and 3D convolutional neural networks for generating male pelvic synthetic computed tomography from magnetic resonance imaging.
      ,

      Largent A, Marage L, Gicquiau I, Nunes J-C, Reynaert N, Castelli J, et al. Head-and-Neck MRI-only radiotherapy treatment planning: From acquisition in treatment position to pseudo-CT generation. Cancer/Radiothérapie 2020:S1278321820300615. 10.1016/j.canrad.2020.01.008.

      ,
      • Tie X.
      • Lam S.
      • Zhang Y.
      • Lee K.
      • Au K.
      • Cai J.
      Pseudo-CT generation from multi-parametric MRI using a novel multi-channel multi-path conditional generative adversarial network for nasopharyngeal carcinoma patients.
      ], all MR images were normalized using a histogram-based intensity normalization [
      • Nyul L.G.
      • Udupa J.K.
      • Zhang X.
      New Variants of a Method of MRI Scale Standardization.
      ] to minimize the inter-patient MR intensity variation. Intensity normalization was also used in [
      • Xiang L.
      • Wang Q.
      • Nie D.
      • Zhang L.
      • Jin X.
      • Qiao Y.
      • et al.
      Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image.
      ,
      • Lei Y.
      • Harms J.
      • Wang T.
      • Liu Y.
      • Shu H.-K.
      • Jani A.B.
      • et al.
      MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks.
      ]. In [
      • Han X.
      MR-based synthetic CT generation using a deep convolutional neural network method.
      ], all MR images were histogram-matched to a randomly chosen template to standardize image intensities across patients, using the method described by Cox et al. [
      • Cox I.
      • Roy S.
      • Hingorani S.L.
      Dynamic histogram warping of image pairs for constant image brightness.
      ]. All MR volumes were normalized by aligning the white matter peak identified by fuzzy C-means in [

      Yang H, Sun J, Carass A, Zhao C, Lee J, Xu Z, et al. Unpaired Brain MR-to-CT Synthesis Using a Structure-Constrained CycleGAN. In: Stoyanov D, Taylor Z, Carneiro G, Syeda-Mahmood T, Martel A, Maier-Hein L, et al., editors. Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support, Cham: Springer International Publishing; 2018, p. 174–82. 10.1007/978-3-030-00889-5_20.

      ]. In [
      • Dinkla A.M.
      • Florkow M.C.
      • Maspero M.
      • Savenije M.H.F.
      • Zijlstra F.
      • Doornaert P.A.H.
      • et al.
      Dosimetric evaluation of synthetic CT for head and neck radiotherapy generated by a patch-based three-dimensional convolutional neural network.
      ,

      Klages P, Bensilmane I, Riyahi S, Jiang J, Hunt M, Deasy JO, et al. Comparison of Patch-Based Conditional Generative Adversarial Neural Net Models with Emphasis on Model Robustness for Use in Head and Neck Cases for MR-Only Planning. 2020. arXiv:1902.00536.

      ], histogram standardization was applied using vendor-provided software (CLEAR).
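The histogram-based standardization of Nyúl et al. cited above can be sketched as a piecewise-linear mapping of per-patient percentile landmarks onto a common standard scale. The landmark positions and target scale below are illustrative choices, not the exact values of any reviewed study:

```python
import numpy as np

# Percentile landmarks and target standard scale (illustrative values;
# Nyul et al. use deciles plus low/high percentile cut-offs).
LANDMARKS = [1, 10, 25, 50, 75, 90, 99]
STANDARD_SCALE = [0, 100, 250, 500, 750, 900, 1000]

def standardize(mr):
    """Map each patient's intensity landmarks onto the common standard
    scale by piecewise-linear interpolation, reducing inter-patient
    MR intensity variation before training or synthesis."""
    pct = np.percentile(mr, LANDMARKS)
    # np.interp clamps intensities outside the landmark range to the
    # first/last value of the standard scale
    return np.interp(mr, pct, STANDARD_SCALE)
```

Because the mapping is monotone, tissue contrast within a patient is preserved while corresponding tissues land at comparable intensities across patients.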
      In the study by Maspero et al. [
      • Maspero M.
      • Savenije M.H.F.
      • Dinkla A.M.
      • Seevinck P.R.
      • Intven M.P.W.
      • Jurgenliemk-Schulz I.M.
      • et al.
      Dose evaluation of fast synthetic-CT generation using a generative adversarial network for general pelvis MR-only radiotherapy.
      ], the voxel intensity of the CT was clipped to a fixed HU interval to avoid an excessively large discretization step, and the MR images were normalized to their 95% intensity interval over the whole patient. All images were converted to 8-bit to conform to the Pix2Pix implementation [
      • Isola P.
      • Zhu J.-Y.
      • Zhou T.
      • Efros A.A.
      ]. Before training, the air cavities in the CT images were filled and bulk-assigned to −1000 HU at the locations identified on the MR images, using an automated method.
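A minimal sketch of this kind of clip-normalize-quantize preprocessing is given below. The HU clipping bounds are illustrative placeholders, not the interval used in the original study:

```python
import numpy as np

def preprocess_pair(ct, mr, ct_lo=-1000.0, ct_hi=2000.0):
    """Clip CT to a fixed HU interval, normalize MR to its 95% intensity
    interval, and convert both to 8-bit, as required by Pix2Pix-style
    implementations that expect 8-bit image inputs."""
    # CT: clip to [ct_lo, ct_hi] (illustrative bounds) and rescale to 0-255
    ct8 = np.clip(ct, ct_lo, ct_hi)
    ct8 = ((ct8 - ct_lo) / (ct_hi - ct_lo) * 255.0).astype(np.uint8)

    # MR: keep the central 95% intensity interval over the whole patient
    lo, hi = np.percentile(mr, [2.5, 97.5])
    mr8 = np.clip(mr, lo, hi)
    mr8 = ((mr8 - lo) / (hi - lo) * 255.0).astype(np.uint8)
    return ct8, mr8
```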

      Training data characteristics

      Compared to 2D CNNs, 3D CNNs can better model 3D spatial information (neighborhood information) owing to the use of 3D convolution operations [

      Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In: Carneiro G, Mateus D, Peter L, Bradley A, Tavares JMRS, Belagiannis V, et al., editors. Deep Learning and Data Labeling for Medical Applications, vol. 10008, Cham: Springer International Publishing; 2016, p. 170–8. 10.1007/978-3-319-46976-8_18.

      ], solving the across-slice discontinuity problem from which 2D CNNs suffer. However, the input to DL models is mainly 2D, because fully 3D networks are much more difficult to train: they have a large number of trainable parameters and require considerably more (GPU) memory and training data [

      Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In: Carneiro G, Mateus D, Peter L, Bradley A, Tavares JMRS, Belagiannis V, et al., editors. Deep Learning and Data Labeling for Medical Applications, vol. 10008, Cham: Springer International Publishing; 2016, p. 170–8. 10.1007/978-3-319-46976-8_18.

      ,
      • Han X.
      MR-based synthetic CT generation using a deep convolutional neural network method.
      ]. With a 2.5D approach, Dinkla et al. [
      • Dinkla A.M.
      • Wolterink J.M.
      • Maspero M.
      • Savenije M.H.F.
      • Verhoeff J.J.C.
      • Seravalli E.
      • et al.
      MR-Only Brain Radiation Therapy: Dosimetric Evaluation of Synthetic CTs Generated by a Dilated Convolutional Neural Network.
      ] added 3D contextual information while maintaining a manageable number of trainable parameters. Furthermore, the discontinuities across slices present in 2D methods were reduced. In addition, 2.5D approaches [
      • Spadea M.F.
      • Pileggi G.
      • Zaffino P.
      • Salome P.
      • Catana C.
      • Izquierdo-Garcia D.
      • et al.
      Deep Convolution Neural Network (DCNN) Multiplane Approach to Synthetic CT Generation From MR images—Application in Brain Proton Therapy.
      ,
      • Dinkla A.M.
      • Wolterink J.M.
      • Maspero M.
      • Savenije M.H.F.
      • Verhoeff J.J.C.
      • Seravalli E.
      • et al.
      MR-Only Brain Radiation Therapy: Dosimetric Evaluation of Synthetic CTs Generated by a Dilated Convolutional Neural Network.
      ,
      • Thummerer A.
      • de Jong B.A.
      • Zaffino P.
      • Meijers A.
      • Marmitt G.G.
      • Seco J.
      • et al.
      Comparison of the suitability of CBCT- and MR-based synthetic CTs for daily adaptive proton therapy in head and neck patients.
      ] take axial, sagittal, and coronal images as input to train the CNN and average the resulting sCT predictions over the three orientations. In 3D (patch-based) CNNs [

      Nie D, Cao X, Gao Y, Wang L, Shen D. Estimating CT Image from MRI Data Using 3D Fully Convolutional Networks. In: Carneiro G, Mateus D, Peter L, Bradley A, Tavares JMRS, Belagiannis V, et al., editors. Deep Learning and Data Labeling for Medical Applications, vol. 10008, Cham: Springer International Publishing; 2016, p. 170–8. 10.1007/978-3-319-46976-8_18.

      ,
      • Lei Y.
      • Harms J.
      • Wang T.
      • Liu Y.
      • Shu H.-K.
      • Jani A.B.
      • et al.
      MRI-only based synthetic CT generation using dense cycle consistent generative adversarial networks.
      ], an input MR image is first partitioned into overlapping patches. For each patch, the CNN is used to predict the corresponding CT patch and all predicted CT patches are merged into a single CT image by averaging the intensities of overlapping CT regions.
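This partition-and-merge step can be sketched as follows (2D for brevity, with an identity function standing in for the trained CNN; a real implementation also adds border patches when the image size is not a multiple of the stride):

```python
import numpy as np

def predict_by_patches(mr, model, patch=32, stride=16):
    """Patch-based synthesis: partition the image into overlapping
    patches, predict a CT patch for each, and merge the predictions by
    averaging intensities where patches overlap."""
    acc = np.zeros_like(mr, dtype=float)     # summed predictions
    weight = np.zeros_like(mr, dtype=float)  # overlap counts
    h, w = mr.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            acc[y:y + patch, x:x + patch] += model(mr[y:y + patch, x:x + patch])
            weight[y:y + patch, x:x + patch] += 1.0
    # average overlapping regions (guard against uncovered voxels)
    return acc / np.maximum(weight, 1.0)
```

Averaging the overlaps smooths seams between adjacent patch predictions, which is why overlapping strides are preferred to a non-overlapping tiling.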
      Most of the reviewed studies used one MRI sequence as input and generated one sCT as output, an architecture generally called single-input single-output (SISO). Four studies used several MRI sequences as input to generate one sCT as output [
      • Qi M.
      • Li Y.
      • Wu A.
      • Jia Q.
      • Li B.
      • Sun W.
      • et al.
      Multi-sequence MR image-based synthetic CT generation using a generative adversarial network for head and neck MRI-only radiotherapy.
      ,

      Massa HA, Johnson JM, McMillan AB. Comparison of deep learning synthesis of synthetic CTs using clinical MRI inputs. Phys Med Biol 2020;65:23NT03. 10.1088/1361-6560/abc5cb.

      ,
      • Koike Y.
      • Akino Y.
      • Sumida I.
      • Shiomi H.
      • Mizuno H.
      • Yagi M.
      • et al.
      Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy.
      ,
      • Sharma A.
      • Hamarneh G.
      Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network.
      ]; these architectures are referred to as multi-input single-output (MISO) [
      • Qi M.
      • Li Y.
      • Wu A.
      • Jia Q.
      • Li B.
      • Sun W.
      • et al.
      Multi-sequence MR image-based synthetic CT generation using a generative adversarial network for head and neck MRI-only radiotherapy.
      ,

      Massa HA, Johnson JM, McMillan AB. Comparison of deep learning synthesis of synthetic CTs using clinical MRI inputs. Phys Med Biol 2020;65:23NT03. 10.1088/1361-6560/abc5cb.

      ,
      • Koike Y.
      • Akino Y.
      • Sumida I.
      • Shiomi H.
      • Mizuno H.
      • Yagi M.
      • et al.
      Feasibility of synthetic computed tomography generated with an adversarial network for multi-sequence magnetic resonance-based brain radiotherapy.
      ,
      • Sharma A.
      • Hamarneh G.
      Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network.
      ] or multi-input multiple-output (MIMO) [
      • Sharma A.
      • Hamarneh G.
      Missing MRI Pulse Sequence Synthesis Using Multi-Modal Generative Adversarial Network.
      ]. Moreover, most studies used training and evaluation data from a single MRI device, while eight studies used multi-device MRI data. One study reported the use of MRI data from different centers [
      • Maspero M.
      • Bentvelzen L.G.
      • Savenije M.H.F.
      • Guerreiro F.
      • Seravalli E.
      • Janssens G.O.
      • et al.
      Deep learning-based synthetic CT generation for paediatric brain MR-only photon and proton radiotherapy.
      ] and two studies [
      • Brou Boni K.N.D.
      • Klein J.
      • Vanquin L.
      • Wagner A.
      • Lacornerie T.
      • Pasquier D.
      • et al.
      MR to CT synthesis with multicenter data in the pelvic era using a conditional generative adversarial network.
      ,
      • Fetty L.
      • Löfstedt T.
      • Heilemann G.
      • Furtado H.
      • Nesvacil N.
      • Nyholm T.
      • et al.
      Investigating conditional GAN performance with different generator architectures, an ensemble model, and different MR scanners for MR-sCT conversion.
      ] used data from the Gold Atlas Data set [
      • Nyholm T.
      • Svensson S.
      • Andersson S.
      • Jonsson J.
      • Sohlin M.
      • Gustafsson C.
      • et al.
      MR and CT data with multiobserver delineations of organs in the pelvic area—Part of the Gold Atlas project.
      ]. Five studies used low-field MR (0.35 T) images as input [
      • Cusumano D.
      • Lenkowicz J.
      • Votta C.
      • Boldrini L.
      • Placidi L.
      • Catucci F.
      • et al.
      A deep learning approach to generate synthetic CT in low field MR-guided adaptive radiotherapy for abdominal and pelvic cases.
      ,

      Fu J, Singhrao K, Cao M, Yu V, Santhanam AP, Yang Y, et al. Generation of abdominal synthetic CTs from 0.35T MR images using generative adversarial networks for MR-only liver radiotherapy. Biomed Phys Eng Express 2020;6:015033. 10.1088/2057-1976/ab6e1f.

      ,
      • Olberg S.
      • Zhang H.
      • Kennedy W.R.
      • Chun J.
      • Rodriguez V.
      • Zoberi I.
      • et al.
      Synthetic CT reconstruction using a deep spatial pyramid convolutional framework for MR-only breast radiotherapy.
      ,

      Jeon W, An HJ, Kim J, Park JM, Kim H, Shin KH, et al. Preliminary Application of Synthetic Computed Tomography Image Generation from Magnetic Resonance Image Using Deep-Learning in Breast Cancer Patients. J Radiat Prot Res 2019;44:149–55. 10.14407/jrpr.2019.44.4.149.

      ,
      • Fetty L.
      • Löfstedt T.
      • Heilemann G.
      • Furtado H.
      • Nesvacil N.
      • Nyholm T.
      • et al.
      Investigating conditional GAN performance with different generator architectures, an ensemble model, and different MR scanners for MR-sCT conversion.
      ].

      Training and evaluation data size

      The studies included in this review used several training strategies, including k-fold cross-validation, single-fold validation, and leave-one-out. In k-fold cross-validation, the dataset is divided into k subsets and the holdout method is repeated k times: each time, one of the k subsets is used as the test set and the remaining k−1 subsets are combined to form the training set; the average error across all k trials is then computed. In single-fold validation, the dataset is separated into two sets, the training and testing sets. The leave-one-out strategy is k-fold cross-validation taken to its logical extreme, with k equal to N, the number of patients in the dataset.
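These splitting strategies can be sketched in a few lines; with k equal to the number of patients, the same routine yields leave-one-out:

```python
import numpy as np

def k_fold_splits(n_patients, k):
    """Yield (train_indices, test_indices) pairs for k-fold
    cross-validation over patient indices 0..n_patients-1.
    k == n_patients reduces to leave-one-out."""
    idx = np.arange(n_patients)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        # the remaining k-1 folds form the training set
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test
```

In practice the split is done at the patient level, as here, so that slices from the same patient never appear in both the training and the test set.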
      Data size is a fundamental challenge for DL approaches. There is no reported minimal or optimal data size for DL training. In the head area, four studies assessed sCT image quality as a function of the number of available images for training, from 15 to 242 patients for Alvares Andres et al. [

      Andres EA, Fidon L, Vakalopoulou M, Lerousseau M, Carré A, Sun R, et al. Dosimetry-driven quality measure of brain pseudo Computed Tomography generated from deep learning for MRI-only radiotherapy treatment planning. Int J Radiat Oncol Biol Phys 2020:S0360301620311305. 10.1016/j.ijrobp.2020.05.006.

      ], from 5 to 47 patients for Gupta [
      • Gupta D.
      • Kim M.
      • Vineberg K.A.
      • Balter J.M.
      Generation of Synthetic CT Images From MRI for Treatment Planning and Patient Positioning Using a 3-Channel U-Net Trained on Sagittal Images.
      ], from 34 to 135 patients for Peng et al. [
      • Peng Y.
      • Chen S.
      • Qin A.
      • Chen M.
      • Gao X.
      • Liu Y.
      • et al.
      Magnetic resonance-based synthetic computed tomography images generated using generative adversarial networks for nasopharyngeal carcinoma radiotherapy treatment planning.
      ], and from 1 to 40 patients for Maspero et al. [
      • Maspero M.
      • Bentvelzen L.G.
      • Savenije M.H.F.
      • Guerreiro F.
      • Seravalli E.
      • Janssens G.O.
      • et al.
      Deep learning-based synthetic CT generation for paediatric brain MR-only photon and proton radiotherapy.
      ]. Better image results were found with higher numbers of available images. A minimum of about 10 patients seems to be needed, since this showed performance similar to training with 20, 30, or 40 patients. One effective way to improve model robustness is to increase the diversity of the training dataset. Data augmentation is essential to teach the network the desired invariance and robustness properties when only a few training samples are available. One common data augmentation technique [
      • Lei Y.
      • Harms J.
      • Wang T.
      • Liu Y.