
Current applications of deep-learning in neuro-oncological MRI

C.M.L. Zegers (corresponding author)¹, J. Posch¹, A. Traverso¹, D. Eekers¹, A.A. Postma², W. Backes², A. Dekker¹, W. van Elmpt¹

¹ Department of Radiation Oncology (Maastro), Maastricht University Medical Center+, GROW School for Developmental Biology and Oncology, Maastricht, the Netherlands
² Department of Radiology & Nuclear Medicine, Maastricht University Medical Center, MHeNs School for Mental Health and Neuroscience, Maastricht, the Netherlands
Open Access. Published: March 26, 2021. DOI: https://doi.org/10.1016/j.ejmp.2021.03.003

      Highlights

      • Deep learning (DL) has the potential to enhance processing and interpretation of MRI.
      • This review gives an overview of the use of DL in MRI for neuro-oncology.
      • DL applications can improve MRI technological innovation, diagnosis and follow-up.

      Abstract

      Purpose

      Magnetic Resonance Imaging (MRI) provides an essential contribution to the screening, detection, diagnosis, staging, treatment and follow-up of patients with a neurological neoplasm. Deep learning (DL), a subdomain of artificial intelligence, has the potential to enhance the characterization, processing and interpretation of MRI images. The aim of this review paper is to give an overview of the current state-of-the-art usage of DL in MRI for neuro-oncology.

      Methods

      We searched the PubMed database by applying a specific search strategy combining MRI, DL, neuro-oncology and their corresponding search terminologies, focusing on Medical Subject Headings (MeSH) or title/abstract appearance. The original research papers were classified, based on their application, into three categories: technological innovation, diagnosis and follow-up.

      Results

      Forty-one publications were eligible for review, all published after 2016. The majority (N = 22) were assigned to technological innovation, twelve focused on diagnosis and seven related to patient follow-up. Applications ranged from improved image acquisition and synthetic CT generation to auto-segmentation, tumor classification, outcome prediction and response assessment. Most publications made use of standard sequences (T1w, cT1w, T2w and FLAIR imaging), with only a few exceptions using more advanced MRI technologies. The majority of studies used a variation on convolutional neural network (CNN) architectures.

      Conclusion

      Deep learning in MRI for neuro-oncology is a novel field of research with potential in a broad range of applications. Remaining challenges include the accessibility of large imaging datasets, the applicability across institutes/vendors, and the validation and implementation of these technologies in clinical practice.


      Introduction

      The field of Artificial Intelligence (AI) is evolving at a rapid pace. Exponentially growing computational algorithms, such as artificial intelligence methods, are expected to improve diagnosis, therapy and follow-up in medicine [

      J. D. Rudie, A. M. Rauschecker, R. N. Bryan, C. Davatzikos, and S. Mohan, Emerging Applications of Artificial Intelligence in Neuro-Oncology, Radiology, vol. 290, no. 3, Art. no. 3, Mar. 2019, doi: 10.1148/radiol.2018181928.

      ]. Imaging-related studies in health care, especially, are emerging in the subdomain of AI called deep learning (DL) [
      Sahiner B., Pezeshk A., Hadjiiski L.M., Wang X., Drukker K., Cha K.H., et al. Deep learning in medical imaging and radiation therapy.
      ,

      Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, vol. 521, no. 7553, Art. no. 7553, May 2015, doi: 10.1038/nature14539.

      ]. In contrast to traditional machine learning (ML), where careful engineering is necessary to define and extract elements (features) to detect or classify patterns in the image, deep-learning allows the use of raw imaging data and can automatically discover the representations needed for detection or classification [

      Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, vol. 521, no. 7553, Art. no. 7553, May 2015, doi: 10.1038/nature14539.

      ]. Deep-learning based AI technology therefore provides unprecedented enhancements in terms of (automated) image analysis in many fields of medicine [

      S. M. McKinney et al., International evaluation of an AI system for breast cancer screening, Nature, vol. 577, no. 7788, Art. no. 7788, Jan. 2020, doi: 10.1038/s41586-019-1799-6.

      ]. For oncological investigations, typical applications lie in the diagnosis and staging of cancer, treatment decisions and individual treatment optimization including prognosis modelling, and follow-up imaging. Many imaging modalities can aid in the care path of cancer patients. In this review we focus on the imaging method most frequently applied in neuro-oncology, namely magnetic resonance imaging (MRI). Given the large amount of data currently generated on MRI scanners applying different image acquisition sequences and post-processing steps, deep-learning technology is ideally suited for the analysis of these large-scale, multi-dimensional image sets. MRI and the (automated) analysis of MRI data are among the cornerstones for the previously mentioned applications in the neuro-oncology domain [
      Kickingereder P., Isensee F., Tursunova I., Petersen J., Neuberger U., Bonekamp D., et al. Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: a multicentre, retrospective study.
      ].
      Data scientists have an increasing role in the image analysis and interpretation of advanced MRI images, due to the large amount of data generated. For example, diffusion tensor imaging (DTI) is used to measure the directionality of proton motion, which is often altered in the presence of brain tumors [
      Le Bihan D., Mangin J.-F., Poupon C., Clark C.A., Pappata S., Molko N., et al. Diffusion tensor imaging: concepts and applications.
      ]. For this application, physicists, radiologists and computer scientists needed to collaborate to extract the maximum amount of information captured in these imaging sequences. With the increase in computing power and advanced programming algorithms, the use of deep-learning algorithms to extract relevant information from MR imaging is expected to grow even further.
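As an illustration of the kind of quantitative measure derived from DTI, the directionality of diffusion is commonly summarized as fractional anisotropy (FA), computed from the eigenvalues of the fitted diffusion tensor. A minimal numpy sketch (not taken from any of the reviewed papers):

```python
import numpy as np

def fractional_anisotropy(eigvals):
    """Fractional anisotropy (FA) from the three eigenvalues of a diffusion tensor.

    FA = sqrt(3/2) * ||lambda - mean(lambda)|| / ||lambda||,
    ranging from 0 (isotropic diffusion) to 1 (fully directional diffusion).
    """
    lam = np.asarray(eigvals, dtype=float)
    md = lam.mean()                          # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())   # deviation from isotropy
    den = np.sqrt((lam ** 2).sum())
    if den == 0.0:
        return 0.0
    return float(np.sqrt(1.5) * num / den)

# Isotropic voxel: no preferred diffusion direction
print(fractional_anisotropy([1.0, 1.0, 1.0]))  # → 0.0
# Strongly anisotropic voxel (e.g. inside a white-matter tract)
print(round(fractional_anisotropy([1.7, 0.2, 0.2]), 3))
```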
      The aim of this review paper is to give an overview of the current state-of-the-art of deep-learning applications in MRI with a specific focus on neuro-oncology. For this investigation, we categorized the available literature into domains following the clinical care path, from technology to diagnosis and follow-up.

      Methods

      Search strategy

      To provide an overview of the available literature combining deep learning and MRI in neuro-oncology, we used the PubMed database and defined a specific search strategy including a combination of MRI, DL, neuro-oncology and corresponding search terminologies. We applied a search terminology focusing on MeSH terms or title/abstract appearance. The specific search string was:
      “Magnetic Resonance Imaging”[Mesh] AND “Deep Learning”[Mesh] AND (Neuro-oncology[tiab] OR “Brain Neoplasms”[Mesh] OR “Central Nervous System Neoplasms”[Mesh] OR “Neoplasms, Neuroepithelial”[Mesh] OR “Meningeal Neoplasms”[Mesh])
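For reproducibility, the same query can be assembled programmatically, for example before passing it as the `term` parameter of the NCBI E-utilities `esearch` endpoint. A small illustrative sketch (the helper `mesh` is our own naming, not part of any API):

```python
# Assemble the review's PubMed query programmatically. The MeSH terms and
# the [tiab] (title/abstract) field tag mirror the search string quoted above.
def mesh(term):
    return f'"{term}"[Mesh]'

neuro_onc = " OR ".join([
    "Neuro-oncology[tiab]",
    mesh("Brain Neoplasms"),
    mesh("Central Nervous System Neoplasms"),
    mesh("Neoplasms, Neuroepithelial"),
    mesh("Meningeal Neoplasms"),
])
query = (
    f'{mesh("Magnetic Resonance Imaging")} AND {mesh("Deep Learning")} '
    f'AND ({neuro_onc})'
)
print(query)
```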

      Data extraction

      From the original publications, we extracted title, first author, journal, year of publication, study type, goal of study, patient population, sample size, DL technology and MRI technology used. In addition, we classified the original research papers into three categories: (1) technological innovations, (2) diagnosis and (3) follow-up.
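The extracted fields map naturally onto a small record type. A hypothetical Python sketch of how such a data-extraction table could be represented (field names follow the text above; the example row is taken from Table 1):

```python
from dataclasses import dataclass
from enum import Enum

class Category(Enum):
    TECHNOLOGICAL_INNOVATION = 1
    DIAGNOSIS = 2
    FOLLOW_UP = 3

@dataclass
class StudyRecord:
    # Fields extracted from each original publication (see Data extraction)
    title: str
    first_author: str
    journal: str
    year: int
    study_type: str
    goal: str
    population: str
    sample_size: str
    dl_technology: str
    mri_technology: str
    category: Category

example = StudyRecord(
    title="Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI",
    first_author="Gong et al.", journal="J Magn Reson Imaging", year=2018,
    study_type="Prospective", goal="To reduce gadolinium dose in contrast-enhanced brain MRI",
    population="mixed & glioma", sample_size="60 patients",
    dl_technology="Encoder-decoder CNN", mri_technology="T1-w IR-FSPGR",
    category=Category.TECHNOLOGICAL_INNOVATION,
)
```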

      Results

      The search strategy resulted in 45 publications (date: 1 November 2020). We excluded publications written in Chinese (N = 1), reviews or editorial publications (N = 4), publications not specifically focusing on DL applications using MRI images (N = 1) and publications without an available full text (N = 1). Through reference searching in these papers, we identified three additional publications, which were included in the scope of this review (Fig. 1). A total of 41 publications were reviewed. All were published after 2016, indicating the recent and timely evolution of this field (Fig. 2).
      Fig. 2. Number of publications per year, per category. *The year 2020 is incomplete, since the reference search was performed on 1 November 2020.

      Technological innovations

      We defined the category 'technological innovations' for research focused on improving the image acquisition, the image analysis or the automation of the treatment process. Twenty-two of the 41 publications were assigned to this category (Table 1), with a large variation in approaches. Tasks ranged from reducing the contrast agent dose during image acquisition [Gong et al.], filtering artefacts and accelerating spectral fitting in MRSI [Gurbani et al. 2018, 2019] and separating brain from non-brain tissue [Isensee et al.], to generating synthetic Computed Tomography (CT) images for radiotherapy treatment [Kazemifar et al.; Neppl et al.; Liu et al.; Dinkla et al.], auto-segmentation of brain tumors [Hoseini et al. 2018, 2019; Iqbal et al.; Wang et al.; Li et al.; Sun et al.; Deng et al.; Pereira et al.; Perkuhn et al.; Tang et al.; Geetha et al.; Thillaikkarasi et al.] and solutions to generate, or cope with, limited annotated data [Thillaikkarasi et al.; Stember et al.]. The patient populations were diverse, comprising healthy subjects and unspecified brain tumors as well as glioma, glioblastoma, meningioma and brain metastases. Sample sizes ranged from 10 MRI scans [Gurbani et al. 2019] to 1107 patients with 2925 MRI scans [Isensee et al.]. The most frequently used MRI sequences were T1-weighted (T1-w) or contrast-enhanced T1-w (cT1-w), frequently complemented by T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) imaging. Magnetic resonance spectroscopic imaging (MRSI) was used in only two publications (see Table 1).
      Table 1. The publications associated with the category 'technological innovation'.
      Title | First author | Journal | Year | Study type | Goal of study | Patient population | Sample size | AI technology | MRI technology
      Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI | Gong et al. | J Magn Reson Imaging | 2018 | Prospective | To reduce gadolinium dose in contrast-enhanced brain MRI | Mixed & glioma | 60 patients | Encoder-decoder CNN with bypass and residual connections | T1-w IR-FSPGR
      MR-based treatment planning in radiation therapy using a deep learning approach | Liu et al. | J Appl Clin Med Phys | 2019 | Retrospective | To develop and evaluate the feasibility of DL approaches for MR-based radiation treatment planning | Stroke patients & brain metastases | 50 patients | CNN; deep convolutional encoder-decoder network | cT1-w
      Clinical evaluation of a multiparametric deep learning model for glioblastoma segmentation using heterogeneous magnetic resonance imaging data from clinical routine | Perkuhn et al. | Invest Radiol | 2018 | Retrospective | To evaluate an automatic GBM tumor segmentation algorithm on data from multiple centers | Glioblastoma | 64 patients | DL model based on DeepMedic, a multilayer, multiscale CNN | cT1-w, T1-w, T2-w, FLAIR
      Postoperative glioma segmentation in CT image using deep feature fusion model guided by multi-sequence MRIs | Tang et al. | Eur Radiol | 2020 | Retrospective | To develop a deep feature fusion model (DFFM) guided by multi-sequence MRIs for postoperative glioma segmentation | Postoperative gliomas | 59 patients | Multi-channel CNN architecture | cT1-w, T1-w, T2-w, FLAIR
      Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation | Iqbal et al. | Microsc Res Tech | 2019 | Retrospective | To present DL models using long short-term memory (LSTM) and CNN (ConvNet) for accurate brain tumor delineation | Glioma | 384 patients | Long short-term memory (LSTM) and CNN (ConvNet) | cT1-w, T1-w, T2-w, FLAIR
      An efficient implementation of deep convolutional neural networks for MRI segmentation | Hoseini et al. | J Digit Imaging | 2018 | Retrospective | To segment brain tumors in MRI using DL | Brain tumors | 230 brain images | High-capacity deep CNN with more than one layer; the DCNN contains two parts: architecture and learning algorithms | cT1-w, T1-w, T2-w, FLAIR
      An enhancement of deep learning algorithm for brain tumor segmentation using kernel-based CNN with M-SVM | Thillaikkarasi et al. | J Med Syst | 2019 | Retrospective | To present a novel deep learning algorithm (kernel-based CNN) with M-SVM to segment the tumor automatically and efficiently | Not mentioned | 40 patients | Image classification using M-SVM classifier & tumor segmentation using DL algorithm | Not mentioned
      AdaptAhead optimization algorithm for learning deep CNN applied to MRI segmentation | Hoseini et al. | J Digit Imaging | 2019 | Descriptive | Development of the AdaptAhead optimization algorithm for learning DCNN with a robust architecture in relation to high-volume data | Glioma | 230 brain images | Optimization algorithm for learning DCNN based on a combination of Nesterov and RMSProp techniques (AdaptAhead) | cT1-w, T1-w, T2-w, FLAIR
      Interactive medical image segmentation using deep learning with image-specific fine tuning | Wang et al. | IEEE Trans Med Imaging | 2018 | Descriptive | 3-D segmentation of brain tumor core and whole brain tumor from different MR sequences | Glioma | 274 scans from 198 patients | DL-based interactive segmentation framework incorporating CNNs into a bounding box | cT1-w, FLAIR, T2-w
      A novel end-to-end brain tumor segmentation method using improved fully convolutional networks | Li et al. | Comput Biol Med | 2019 | Descriptive | To develop a novel end-to-end brain tumor segmentation method using an improved fully CNN by modifying the U-Net architecture | Glioma | 274 scans from 198 patients | Improved fully CNN by modifying the U-Net architecture | cT1-w, T1-w, T2-w, FLAIR
      Eye tracking for deep learning segmentation using convolutional neural networks | Stember et al. | J Digit Imaging | 2019 | Retrospective | To show that segmentation masks generated with the help of eye tracking are similar to those rendered by hand annotation | Meningioma, normal brain | 444 scans | CNN | cT1-w
      DRRNet: Dense residual refine networks for automatic brain tumor segmentation | Sun et al. | J Med Syst | 2019 | Descriptive | To propose a novel automatic 3D CNN-based method for brain tumor segmentation | Glioma | 274 scans | Densely connected 3D CNN-based model, DRRNet | cT1-w, T1-w, T2-w, FLAIR
      A convolutional neural network to filter artifacts in spectroscopic MRI | Gurbani et al. | Magn Reson Med | 2018 | Descriptive | A DL model capable of identifying and filtering out poor-quality spectra | Glioblastoma | NA | CNN | MRSI
      MRI-only brain radiotherapy: Assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach | Kazemifar et al. | Radiother Oncol | 2019 | Retrospective | To assess the dosimetric accuracy of synthetic CT images generated from MRI data for focal brain radiation therapy, using a DL approach | Brain tumors | 77 patients | Generative adversarial network (GAN) | cT1-w
      Evaluation of proton and photon dose distributions recalculated on 2D and 3D Unet-generated pseudoCTs from T1-weighted MR head scans | Neppl et al. | Acta Oncol | 2019 | Retrospective | Comparison of pseudoCTs generated from MRI with a U-shaped CNN for 2D image slices (Unet2D) and for 3D image stacks (Unet3D) | Head | 89 scans | 2D and 3D U-shaped convolutional neural networks (Unet) | T1-w
      Building medical image classifiers with very limited data using segmentation networks | Wong et al. | Med Image Anal | 2018 | Descriptive | A strategy for building medical image classifiers from pre-trained segmentation networks | No tumor, low-grade glioma, glioblastoma | 323 scans | Modified M-Net: the number of feature channels of each convolutional layer evolves with max pooling and upsampling | T1-w, MP-RAGE, SPGR
      Brain tumor segmentation based on improved convolutional neural network in combination with non-quantifiable local texture feature | Deng et al. | J Med Syst | 2019 | Descriptive | Novel brain tumor segmentation method integrating a fully CNN and dense micro-block difference features (DMDF) into a unified framework | Glioma | 100 patients | Fully CNN (FCNN) and dense micro-block difference features (DMDF) | cT1-w, T1-w, T2-w, FLAIR
      Incorporation of a spectral model in a convolutional neural network for accelerated spectral fitting | Gurbani et al. | Magn Reson Med | 2019 | Descriptive | A novel deep learning architecture that combines a CNN with a priori models of the spectrum | Glioblastoma | 10 scans | CNN with a priori models of the spectrum | MRSI
      Adaptive feature recombination and recalibration for semantic segmentation with fully convolutional networks | Pereira et al. | IEEE Trans Med Imaging | 2019 | Descriptive | Recombination of features and a spatially adaptive recalibration block adapted for semantic segmentation with fully CNNs (the SegSE block) | Brain tumors | 396 scans | Fully convolutional networks with the SegSE block | cT1-w, T1-w, T2-w, FLAIR
      A robust grey wolf-based deep learning for brain tumour detection in MR images | Geetha et al. | Biomed Tech | 2020 | Descriptive | Proposes a new accurate brain tumor detection model | Glioma | 58 patients | Deep belief network (DBN) with grey wolf optimisation (GWO) for classification, termed the GW-DBN model | cT1-w, T1-w, T2-w, FLAIR
      Automated brain extraction of multisequence MRI using artificial neural networks | Isensee et al. | Hum Brain Mapp | 2019 | Retrospective | To train and independently validate an ANN for brain extraction | Glioblastoma, healthy subjects, patients with psychiatric symptoms | 1107 patients; 2925 MRI scans | Artificial neural network (ANN) | T1-w, cT1-w, FLAIR, T2-w
      MR-only brain radiation therapy: Dosimetric evaluation of synthetic CTs generated by a dilated convolutional neural network | Dinkla et al. | Int J Radiat Oncol Biol Phys | 2018 | Retrospective | To evaluate whether synthetic CT images generated with a dilated CNN enable accurate MR-based dose calculations in the brain | Brain tumors | 52 patients | Dilated CNN | T1-w

      Image acquisition and pre-processing

      During image acquisition, MRI contrast agents can be used to enhance the visibility of pathology on images. Gadolinium-based contrast is the most frequently used in clinical practice and is of vital importance in neuro-oncological MRI (e.g. T1-w, T2-w and FLAIR sequences). There is, however, convincing evidence that deposition of gadolinium in the deep nuclei of the brain can occur, especially after repeated exposure to gadolinium-based contrast. At the moment the clinical or biological significance is still unknown; nevertheless, the International Society for Magnetic Resonance in Medicine (ISMRM) urges caution in the use of gadolinium, and a reduction of the frequency and amount of contrast agent is preferred [
      Gulani V., Calamante F., Shellock F.G., Kanal E., Reeder S.B. Gadolinium deposition in the brain: summary of evidence and recommendations.
      ]. A deep-learning framework was used to generate full-dose gadolinium images from low-dose images acquired with only 10% of the gadolinium dose. The study showed that the DL method yielded no significant differences with regard to overall image quality, clarity of the contrast enhancement or artifact suppression, and therefore has the potential to reduce the gadolinium contrast agent dose while preserving image quality [
      Gong E., Pauly J.M., Wintermark M., Zaharchuk G. Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI.
      ].
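Such dose-reduction networks are commonly trained in a residual formulation: the model predicts only the missing contrast enhancement, which is then added back to the low-dose input. A schematic numpy sketch with a toy stand-in for the trained encoder-decoder (not the authors' actual model):

```python
import numpy as np

def predict_full_dose(low_dose, residual_model):
    """Residual formulation often used for contrast-dose reduction:
    the network predicts only the missing enhancement signal, which is
    added back to the low-dose (e.g. 10%-dose) input to synthesise a
    full-dose image. Sketch only; `residual_model` stands in for a
    trained encoder-decoder CNN with skip connections.
    """
    residual = residual_model(low_dose)
    return low_dose + residual

# Toy stand-in "model": pretends the missing enhancement is a constant offset.
toy_model = lambda img: np.full_like(img, 0.1)

low = np.zeros((4, 4))          # toy low-dose image
full = predict_full_dose(low, toy_model)
print(full[0, 0])  # → 0.1
```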
      Specifically in magnetic resonance spectroscopic imaging (MRSI), the removal of spectral artefacts is an essential pre-processing step. Deep-learning models have been developed to identify and filter out poor-quality spectral data that could otherwise lead to incorrect classification of voxel pathology. Gurbani et al. [
      Gurbani S.S., et al. A convolutional neural network to filter artifacts in spectroscopic MRI.
      ] trained a CNN to analyze MRSI frequency-domain spectra for artifacts; compared against expert readers, the model achieved a high sensitivity and specificity with an AUC of 0.95. In addition, the same authors used a convolutional neural network to process the MRSI data and perform rapid spectral fitting, where they were able to calculate the relative metabolite concentrations of the brain in under a minute [
      Gurbani S.S., Sheriff S., Maudsley A.A., Shim H., Cooper L.A.D. Incorporation of a spectral model in a convolutional neural network for accelerated spectral fitting.
      ].
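The conventional fitting step that such a network accelerates can be illustrated as a linear least-squares fit of metabolite basis spectra to a measured spectrum. A toy numpy sketch with a hypothetical random basis (real fitting uses physically modelled basis sets and regularisation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical metabolite basis spectra (e.g. NAA, choline, creatine),
# each a column over 128 spectral points.
n_points, n_metab = 128, 3
basis = rng.random((n_points, n_metab))

# Simulate a measured spectrum: known concentrations plus noise.
true_conc = np.array([2.0, 0.5, 1.0])
spectrum = basis @ true_conc + 0.01 * rng.standard_normal(n_points)

# Conventional fitting step: least-squares estimate of relative metabolite
# concentrations (this is the computation the CNN of Gurbani et al. speeds up).
conc_hat, *_ = np.linalg.lstsq(basis, spectrum, rcond=None)
print(np.round(conc_hat, 2))
```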
      In the processing of images for neuroimaging studies, the identification of brain tissue is an important pre-processing step. Its accuracy can impact the quality of further image analysis, such as image registration, segmentation of tumor lesions, measurement of brain volume and cortical thickness, and planning for interventions [
      F. Isensee et al., Automated brain extraction of multisequence MRI using artificial neural networks, Hum Brain Mapp, vol. 40, no. 17, pp. 4952–4964, 2019, doi: 10.1002/hbm.24750.
      ,
      Klein A., Ghosh S.S., Avants B., Yeo B.T.T., Fischl B., Ardekani B., et al. Evaluation of volume-based and surface-based brain image registration methods.
      ,
      de Boer R., Vrooman H.A., Ikram M.A., Vernooij M.W., Breteler M.M.B., van der Lugt A., et al. Accuracy and reproducibility study of automatic MRI brain tissue segmentation methods.
      ]. Challenges in brain segmentation include the labour- and time-intensiveness of manual segmentation and, when segmentation is automated, the diversity in MRI pulse sequences, MRI vendors and neurological pathologies, all of which can affect the automatic segmentation. Isensee et al. [
      F. Isensee et al., Automated brain extraction of multisequence MRI using artificial neural networks, Hum Brain Mapp, vol. 40, no. 17, pp. 4952–4964, 2019, doi: 10.1002/hbm.24750.
      ] trained and independently validated an artificial neural network (ANN) for brain identification on four different datasets, including a large dataset from a prospective randomized neuro-oncology trial (EORTC-26101) and three independent public datasets [
      Shattuck D.W., Mirza M., Adisetiyo V., Hojatkashani C., Salamon G., Narr K.L., et al. Construction of a 3D probabilistic atlas of human cortical structures.
      ,
      Puccio B., Pooley J.P., Pellman J.S., Taverna E.C., Craddock R.C. The preprocessed connectomes project repository of manually corrected skull-stripped T1-weighted anatomical MRI data.
      ,
      Souza R., Lucena O., Garrafa J., Gobbi D., Saluzzi M., Appenzeller S., et al. An open, multi-vendor, multi-field-strength brain MR dataset and analysis of publicly available skull stripping methods agreement.
      ]. The ANN algorithm outperformed six public brain identification methods, based on a comparison of Dice coefficients and Hausdorff distances, and enabled robust brain identification in the presence of pathology. The brain extraction algorithm was applicable to a broad range of MRI sequence types (T1-w, cT1-w, T2-w and FLAIR) and was not influenced by MRI acquisition parameters or hardware.
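The Dice coefficient used in such comparisons measures the overlap between two binary segmentation masks. A minimal numpy implementation:

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|) — 1.0 for identical masks, 0.0 for disjoint ones."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

auto   = np.array([[1, 1, 0], [0, 1, 0]])  # e.g. automatic segmentation
manual = np.array([[1, 1, 0], [0, 0, 0]])  # e.g. manual reference
print(dice_coefficient(auto, manual))  # → 0.8
```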

      Synthetic CT generation

      For radiotherapy treatment planning, CT scans are used to determine the electron or mass density of the tissues. This density is necessary to estimate the absorbed dose of the radiotherapy treatment. In neuro-oncology the use of MRI images provides significant additional value regarding soft tissue contrast in comparison to CT imaging. Therefore, in current clinical practice, most patients receive both MRI and CT imaging in the workup for radiotherapy treatment (Fig. 3). The radiotherapy domain is however moving towards the use of MRI only treatments, by generating a synthetic CT from MRI images. A challenge in this use case is the accurate separation of air and bone, which is essential to limit the dose calculation error. Kazemifar et al. [
      • Kazemifar S.
      • McGuire S.
      • Timmerman R.
      • Wardak Z.
      • Nguyen D.
      • Park Y.
      • et al.
      MRI-only brain radiotherapy: Assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach.
      ] used a generative adversarial network to generate synthetic CT images from cT1-w MRI images and assessed the dosimetric accuracy. Overall, they observed no significant difference in dose parameters evaluated for the target volume and healthy structures. In addition, Neppl et al. [
      • Neppl S.
      • Landry G.
      • Kurz C.
      • Hansen D.C.
      • Hoyle B.
      • Stöcklein S.
      • et al.
      Evaluation of proton and photon dose distributions recalculated on 2D and 3D Unet-generated pseudoCTs from T1-weighted MR head scans.
      ] evaluated the use of a 2D and 3D U-shaped Convolutional Neural Network (CNN) to generate synthetic CT images from T1-w MRI and assessed the effect of photon and proton dose distribution. The dose evaluation for photons yielded a good pass rate for both 2D and 3D U-Nets, while the gamma passing rate (2%, 2 mm) for the proton plans were all above 89.3%. Liu et al. [
      • Liu F.
      • Yadav P.
      • Baschnagel A.M.
      • McMillan A.B.
      MR-based treatment planning in radiation therapy using a deep learning approach.
      ] utilized a deep convolutional encoder-decoder network to develop an automated approach to generate pseudo-CT images from T1-w MR images and observed no significant difference in dose distribution compared to standard kV CT imaging. Last, Dinkla et al. [
      • Dinkla A.M.
      • Wolterink J.M.
      • Maspero M.
      • Savenije M.H.F.
      • Verhoeff J.J.C.
      • Seravalli E.
      • et al.
      MR-only brain radiation therapy: dosimetric evaluation of synthetic CTs generated by a dilated convolutional neural network.
      ] showed that dose calculations performed on synthetic CT images generated by a dilated CNN, which adds a dilation parameter to the standard convolutional kernels, were accurate and can be used for MRI-only intracranial radiation therapy treatment planning.
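The gamma passing rate used in several of these dosimetric evaluations combines a dose-difference criterion with a distance-to-agreement criterion. A deliberately simplified 1D sketch with global normalization and no interpolation (the cited studies use full 3D implementations; this only illustrates the principle):

```python
import numpy as np

def gamma_pass_rate(dose_ref, dose_eval, spacing_mm,
                    dose_crit=0.02, dist_crit_mm=2.0):
    """Simplified 1D global gamma analysis (2%/2 mm by default).

    dose_ref, dose_eval: 1D dose profiles on the same grid.
    spacing_mm: grid spacing in mm.
    Returns the fraction of reference points with gamma <= 1.
    """
    positions = np.arange(len(dose_ref)) * spacing_mm
    dose_tol = dose_crit * dose_ref.max()  # global dose criterion
    passed = 0
    for i, d_ref in enumerate(dose_ref):
        dist = (positions - positions[i]) / dist_crit_mm
        ddose = (dose_eval - d_ref) / dose_tol
        gamma = np.sqrt(dist**2 + ddose**2).min()  # search over all eval points
        passed += gamma <= 1.0
    return passed / len(dose_ref)
```

Identical profiles yield a passing rate of 1.0; a point passes if any evaluated dose point lies within the combined dose/distance ellipse around it.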
      Fig. 3 Example of a patient with an anaplastic oligodendroglioma WHO-grade 3 in the right parietal lobe. Shown are the T1-w image after gadolinium contrast, the FLAIR MRI and the CT used for radiotherapy planning purposes. In blue is the gross tumor volume annotated by an experienced radiation oncologist. The segmentation of the tumor lesion and the generation of a synthetic CT are steps that could potentially be automated using DL technology. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)

      Auto-segmentation

      The segmentation of relevant structures within the brain (both tumor and functional regions of interest) is an important step for many applications, including radiation treatment planning (Fig. 3). Outlining regions of interest is a time-intensive procedure and is known to be prone to inter-observer variability [
      • Visser M.
      • Müller D.M.J.
      • van Duijn R.J.M.
      • Smits M.
      • Verburg N.
      • Hendriks E.J.
      • et al.
      Inter-rater agreement in glioma segmentations on longitudinal MRI.
      ,
      • Bartel F.
      • van Herk M.
      • Vrenken H.
      • Vandaele F.
      • Sunaert S.
      • de Jaeger K.
      • et al.
      Inter-observer variation of hippocampus delineation in hippocampal avoidance prophylactic cranial irradiation.
      ,

      D. B. Eekers et al., The EPTN consensus-based atlas for CT- and MR-based contouring in neuro-oncology, Radiother Oncol, vol. 128, no. 1, pp. 37–43, 2018, doi: 10.1016/j.radonc.2017.12.013.

      ]. Automated segmentation therefore has the potential to improve delineation quality and the treatment workflow.
      As part of the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), a yearly Brain Tumor Segmentation Challenge (BRATS) has been organized from 2012 to 2020, which focuses on the evaluation of state-of-the-art methods for the segmentation of brain tumors in multimodal magnetic resonance imaging (MRI) scans. The available dataset is modified for each yearly challenge. In our search, 8 publications used data made available from these BRATS MICCAI challenges and proposed a diversity of DL methods to improve tumor segmentation [
      • Hoseini F.
      • Shahbahrami A.
      • Bayat P.
      An efficient implementation of deep convolutional neural networks for MRI segmentation.
      ,
      • Hoseini F.
      • Shahbahrami A.
      • Bayat P.
      AdaptAhead optimization algorithm for learning deep CNN applied to MRI segmentation.
      ,
      • Iqbal S.
      • Ghani Khan M.U.
      • Saba T.
      • Mehmood Z.
      • Javaid N.
      • Rehman A.
      • et al.
      Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation.
      ,
      • Wang G.
      • Li W.
      • Zuluaga M.A.
      • Pratt R.
      • Patel P.A.
      • Aertsen M.
      • et al.
      Interactive medical image segmentation using deep learning with image-specific fine tuning.
      ,
      • Li H.
      • Li A.
      • Wang M.
      A novel end-to-end brain tumor segmentation method using improved fully convolutional networks.
      ,
      • Sun J.
      • Chen W.
      • Peng S.
      • Liu B.
      DRRNet: Dense residual refine networks for automatic brain tumor segmentation.
      ,
      • Deng W.
      • Shi Q.
      • Luo K.
      • Yang Y.
      • Ning N.
      Brain tumor segmentation based on improved convolutional neural network in combination with non-quantifiable local texture feature.
      ,
      • Pereira S.
      • Pinto A.
      • Amorim J.
      • Ribeiro A.
      • Alves V.
      • Silva C.A.
      Adaptive feature recombination and recalibration for semantic segmentation with fully convolutional networks.
      ].
      All studies using BRATS data report at least the DICE similarity coefficient for their proposed methods, which ranged from 0.84 to 0.91 for the whole tumor, from 0.72 to 0.86 for the tumor core and from 0.62 to 0.82 for the enhancing lesion (Fig. 4). In detail, on the MICCAI 2015 dataset, Iqbal et al. [
      • Iqbal S.
      • Ghani Khan M.U.
      • Saba T.
      • Mehmood Z.
      • Javaid N.
      • Rehman A.
      • et al.
      Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation.
      ] evaluated two deep learning models, long short-term memory (LSTM) and CNN (ConvNet), and reported a whole-tumor DICE of 0.84 for the ensemble of LSTM and ConvNet. Wang et al. [
      • Wang G.
      • Li W.
      • Zuluaga M.A.
      • Pratt R.
      • Patel P.A.
      • Aertsen M.
      • et al.
      Interactive medical image segmentation using deep learning with image-specific fine tuning.
      ] developed an interactive framework with image-specific fine-tuning-based segmentation (BIFSeg) and reported, for the same dataset, DICE similarity coefficients for the whole tumor of 0.86 (unsupervised) and 0.88 (supervised refinement method). Sun et al. [
      • Sun J.
      • Chen W.
      • Peng S.
      • Liu B.
      DRRNet: Dense residual refine networks for automatic brain tumor segmentation.
      ] used a densely connected, automatic 3D CNN-based method and achieved a DICE of 0.84. Deng et al. [
      • Deng W.
      • Shi Q.
      • Luo K.
      • Yang Y.
      • Ning N.
      Brain tumor segmentation based on improved convolutional neural network in combination with non-quantifiable local texture feature.
      ] integrated fully convolutional neural networks (FCNN) and dense micro-block difference features (DMDF) in a single framework and reported an average DICE of 0.91. Hoseini et al. [
      • Hoseini F.
      • Shahbahrami A.
      • Bayat P.
      An efficient implementation of deep convolutional neural networks for MRI segmentation.
      ] used the MICCAI 2016 dataset to evaluate their high-capacity Deep Convolutional Neural Network (DCNN) and presented a DICE of 0.90. In addition, the authors used the MICCAI 2015 and 2016 datasets to evaluate their proposed optimization algorithm (AdaptAhead) for training the DCNN, which was based on a combination of the Nesterov and RMSProp techniques and showed a DICE of 0.89 (2015 dataset) and 0.85 (2016 dataset) [
      • Hoseini F.
      • Shahbahrami A.
      • Bayat P.
      AdaptAhead optimization algorithm for learning deep CNN applied to MRI segmentation.
      ]. Li et al.[
      • Li H.
      • Li A.
      • Wang M.
      A novel end-to-end brain tumor segmentation method using improved fully convolutional networks.
      ] validated their convolutional network, based on a modification of the U-Net architecture and achieved DICE coefficients for the whole tumor of 0.89 (MICCAI 2015) and 0.88 (MICCAI 2017). Last, Pereira [
      • Pereira S.
      • Pinto A.
      • Amorim J.
      • Ribeiro A.
      • Alves V.
      • Silva C.A.
      Adaptive feature recombination and recalibration for semantic segmentation with fully convolutional networks.
      ] et al. used the MICCAI 2017 and 2013 datasets to validate their proposed method, a recombination of features with a spatially adaptive recalibration block adapted for semantic segmentation with an FCNN (SegSE), and presented DICE coefficients of 0.90 (MICCAI 2017) and 0.89 (MICCAI 2013).
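The whole-tumor, tumor-core and enhancing-lesion DICE values reported above are derived from multi-class label maps. A sketch assuming the common BRATS label convention (1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor; this convention is an assumption, not stated in the text):

```python
import numpy as np

# Assumed BRATS-style label convention:
# 1 = necrotic/non-enhancing core, 2 = edema, 4 = enhancing tumor.
REGIONS = {
    "whole tumor": {1, 2, 4},
    "tumor core": {1, 4},
    "enhancing": {4},
}

def dice(a, b):
    """DICE overlap of two boolean masks."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def region_dice(pred_labels, true_labels):
    """Per-region DICE scores derived from multi-class label maps."""
    scores = {}
    for name, labels in REGIONS.items():
        p = np.isin(pred_labels, list(labels))
        t = np.isin(true_labels, list(labels))
        scores[name] = dice(p, t)
    return scores
```

Because the regions are nested, a method can score well on the whole tumor while missing the smaller enhancing component, which is consistent with the lower enhancing-lesion DICE ranges reported.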
      Fig. 4 Comparison of the DICE similarity coefficient for the whole tumor, tumor core or enhancing lesion reported by the publications [
      • Hoseini F.
      • Shahbahrami A.
      • Bayat P.
      An efficient implementation of deep convolutional neural networks for MRI segmentation.
      ,
      • Hoseini F.
      • Shahbahrami A.
      • Bayat P.
      AdaptAhead optimization algorithm for learning deep CNN applied to MRI segmentation.
      ,
      • Iqbal S.
      • Ghani Khan M.U.
      • Saba T.
      • Mehmood Z.
      • Javaid N.
      • Rehman A.
      • et al.
      Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation.
      ,
      • Wang G.
      • Li W.
      • Zuluaga M.A.
      • Pratt R.
      • Patel P.A.
      • Aertsen M.
      • et al.
      Interactive medical image segmentation using deep learning with image-specific fine tuning.
      ,
      • Li H.
      • Li A.
      • Wang M.
      A novel end-to-end brain tumor segmentation method using improved fully convolutional networks.
      ,
      • Sun J.
      • Chen W.
      • Peng S.
      • Liu B.
      DRRNet: Dense residual refine networks for automatic brain tumor segmentation.
      ,
      • Deng W.
      • Shi Q.
      • Luo K.
      • Yang Y.
      • Ning N.
      Brain tumor segmentation based on improved convolutional neural network in combination with non-quantifiable local texture feature.
      ,
      • Pereira S.
      • Pinto A.
      • Amorim J.
      • Ribeiro A.
      • Alves V.
      • Silva C.A.
      Adaptive feature recombination and recalibration for semantic segmentation with fully convolutional networks.
      ] using the MICCAI Brain Tumor Segmentation Challenge datasets (BRATS 2015–2017).
      Additional studies used independent datasets: 1) preoperative MRI scans (T1, T2, FLAIR and cT1) of glioblastoma patients were used to detect and segment tumor lesions with a multilayer, multiscale convolutional neural network; Perkuhn et al. [
      • Perkuhn M.
      • Stavrinou P.
      • Thiele F.
      • Shakirin G.
      • Mohan M.
      • Garmpis D.
      • et al.
      Clinical evaluation of a multiparametric deep learning model for glioblastoma segmentation using heterogeneous magnetic resonance imaging data from clinical routine.
      ] showed, in a dataset of 64 patients from 15 different institutes, a high lesion detection rate and segmentation accuracy (DICE for whole-tumor segmentation: 0.86), comparable to the inter-rater variability. 2) A multi-sequence MRI-guided convolutional neural network was shown to accurately delineate post-operative gliomas on CT images for radiotherapy, assisting image segmentation and reducing workload [
      • Tang F.
      • Liang S.
      • Zhong T.
      • Huang X.
      • Deng X.
      • Zhang Y.u.
      • et al.
      Postoperative glioma segmentation in CT image using deep feature fusion model guided by multi-sequence MRIs.
      ].
      A challenge in the development of deep learning applications for image analysis is the need for a sufficient amount of annotated data. One approach to generating annotated data for tumor segmentation is the use of eye-tracking technology: segmentation masks generated by eye tracking were compared to hand annotations, and the authors demonstrated that eye tracking can be used to generate segmentation masks suitable for deep learning segmentation tasks [
      • Stember J.N.
      • Celik H.
      • Krupinski E.
      • Chang P.D.
      • Mutasa S.
      • Wood B.J.
      • et al.
      Eye tracking for deep learning segmentation using convolutional neural networks.
      ]. In addition, strategies have been developed to circumvent the issue of limited annotated data, for example by building image classifiers using features from pre-trained segmentation networks. By using these segmentation networks, the machine can first learn from simpler shapes and structures before tackling the actual classification problem. Using this methodology, a high classification performance can be obtained with limited training data [
      • Wong K.C.L.
      • Syeda-Mahmood T.
      • Moradi M.
      Building medical image classifiers with very limited data using segmentation networks.
      ].
      To summarize, deep learning applications are in development for a broad range of technological and (pre-)processing steps of MRI. Additional value has been shown for improving MRI image quality (e.g. by reduction of contrast or artefacts), for generating synthetic CT images from MRI, allowing fewer imaging sessions for the individual patient, and for automatic tissue segmentation and auto-contouring tasks, with the potential to reduce the workload of current processes.

      Diagnosis

      Twelve papers investigated the role of AI in the diagnosis of neurological neoplasms (Table 2). The main objectives of these papers were: pathological or molecular classification of the tumors (N = 7) [
      • Banzato T.
      • Causin F.
      • Della Puppa A.
      • Cester G.
      • Mazzai L.
      • Zotti A.
      Accuracy of deep learning to differentiate the histopathological grading of meningiomas on MR images: A preliminary study.
      ,
      • Deepak S.
      • Ameer P.M.
      Brain tumor classification using deep CNN features via transfer learning.
      ,
      • Jun Y.
      • Eo T.
      • Kim T.
      • Shin H.
      • Hwang D.
      • Bae S.H.
      • et al.
      Deep-learned 3D black-blood imaging using automatic labelling technique and 3D convolutional neural networks for detecting metastatic brain tumors.
      ,
      • Swati Z.N.K.
      • Zhao Q.
      • Kabir M.
      • Ali F.
      • Ali Z.
      • Ahmed S.
      • et al.
      Brain tumor classification for MR images using transfer learning and fine-tuning.
      ,
      • Zhu Y.
      • Man C.
      • Gong L.
      • Dong D.i.
      • Yu X.
      • Wang S.
      • et al.
      A deep learning radiomics model for preoperative grading in meningioma.
      ,

      S. Maki et al., A Deep Convolutional Neural Network With Performance Comparable to Radiologists for Differentiating Between Spinal Schwannoma and Meningioma, Spine (Phila Pa 1976), vol. 45, no. 10, pp. 694–700, May 2020, doi: 10.1097/BRS.0000000000003353.

      ,

      Z. Li, Y. Wang, J. Yu, Y. Guo, and W. Cao, Deep Learning based Radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma, Sci Rep, vol. 7, no. 1, p. 5467, 14 2017, doi: 10.1038/s41598-017-05848-2.

      ], solely the detection of tumors in a Computer Aided Diagnosis (CAD) fashion (N = 1) [
      • Atici M.A.
      • Sagiroglu S.
      • Celtikci P.
      • Ucar M.
      • Borcek A.O.
      • Emmez H.
      • et al.
      A novel deep learning algorithm for the automatic detection of high-grade gliomas on T2-weighted magnetic resonance images: A preliminary machine learning study.
      ] and the combination of detection and segmentation of the lesions (N = 4) [
      • Laukamp K.R.
      • Thiele F.
      • Shakirin G.
      • Zopfs D.
      • Faymonville A.
      • Timmer M.
      • et al.
      Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI.
      ,
      • Peeken J.C.
      • Molina-Romero M.
      • Diehl C.
      • Menze B.H.
      • Straube C.
      • Meyer B.
      • et al.
      Deep learning derived tumor infiltration maps for personalized target definition in Glioblastoma radiotherapy.
      ,
      • Sert E.
      • Özyurt F.
      • Doğantekin A.
      A new approach for brain tumor diagnosis system: Single image super resolution based maximum fuzzy entropy segmentation and convolutional neural network.
      ,
      • Zhou Z.
      • Sanders J.W.
      • Johnson J.M.
      • Gule-Monroe M.K.
      • Chen M.M.
      • Briere T.M.
      • et al.
      Computer-aided Detection of Brain metastases in T1-weighted MRI for stereotactic radiosurgery using deep learning single-shot detectors.
      ]. The number of patients used in the studies ranged from a minimum of 33 patients [
      • Peeken J.C.
      • Molina-Romero M.
      • Diehl C.
      • Menze B.H.
      • Straube C.
      • Meyer B.
      • et al.
      Deep learning derived tumor infiltration maps for personalized target definition in Glioblastoma radiotherapy.
      ] to a maximum of 266 patients [
      • Zhou Z.
      • Sanders J.W.
      • Johnson J.M.
      • Gule-Monroe M.K.
      • Chen M.M.
      • Briere T.M.
      • et al.
      Computer-aided Detection of Brain metastases in T1-weighted MRI for stereotactic radiosurgery using deep learning single-shot detectors.
      ]. Sert et al. [
      • Sert E.
      • Özyurt F.
      • Doğantekin A.
      A new approach for brain tumor diagnosis system: Single image super resolution based maximum fuzzy entropy segmentation and convolutional neural network.
      ] used a publicly available dataset from The Cancer Imaging Archive (TCIA; TCGA-GBM), which includes more than 500 samples; however, the authors selected 100 positive samples (containing at least one tumor) and 100 negative samples (healthy subjects) to train the Convolutional Neural Network (CNN; ResNet architecture).
      Table 2 Overview of the articles in the category 'diagnosis'.
      Title | First author | Journal | Year | Study type | Goal of study | Patient population | Sample size | AI technology | MRI
      Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI. | Laukamp KR et al. | Eur Radiol. | 2019 | Retrospective | To investigate the reliability of automated detection and segmentation of grade I and II meningiomas using a DL model on multiparametric MRI data from diverse scanners including referring institutions. | meningioma | 56 patients | DeepMedic architecture using a deep 3D CNN | T1-w, cT1-w, T2, FLAIR
      A deep learning radiomics model for preoperative grading in meningioma. | Zhu et al. | Eur J Radiol. | 2019 | Retrospective | To develop and validate a DL radiomics model for meningioma grading based on routine post-contrast T1-w before surgery. | meningioma | 181 patients | Pretrained CNN (Xception) | cT1-w
      Brain tumor classification using deep CNN features via transfer learning. | Deepak et al. | Comput Biol Med. | 2019 | Retrospective | To present an accurate and automatic classification system designed for three pathological types of brain tumor. | glioma, meningioma, pituitary tumor | 3064 brain MRI images from 233 patients | Pretrained CNN (GoogLeNet) | cT1-w
      Brain tumor classification for MR images using transfer learning and fine-tuning. | Swati et al. | Comput Med Imaging Graph. | 2019 | Retrospective | To propose a new approach for brain tumor image classification based on transfer learning and fine-tuning. | glioma, meningioma, pituitary tumor | 3064 brain MRI images from 233 patients | Pretrained CNN (VGG19) | cT1-w
      A new approach for brain tumor diagnosis system: Single image super resolution based maximum fuzzy entropy segmentation and convolutional neural network. | Sert et al. | Med Hypotheses. | 2019 | Retrospective | To propose a brain tumor diagnosis approach using single image super resolution based maximum fuzzy entropy segmentation and CNN (SISR-MFES-CNN). | GBM | 200 images | SISR-MFES-CNN (ResNet) | cT1-w
      Deep Learning based Radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma. | Li et al. | Sci Rep. | 2017 | Retrospective | To present the performance of DL radiomics for predicting the mutation status of isocitrate dehydrogenase 1 (IDH1) in patients with low-grade glioma. | glioma | 151 patients | CNN architecture with convolutional layers followed by fully connected layers | T2, FLAIR, cT1-w
      A Deep Convolutional Neural Network With Performance Comparable to Radiologists for Differentiating Between Spinal Schwannoma and Meningioma. | Maki et al. | Spine | 2020 | Retrospective | To evaluate the performance of our CNN in differentiating between spinal schwannoma and meningioma on MRI. | spinal schwannoma and meningioma | 84 patients | Pretrained CNN (InceptionV3) | cT1-w, T2-w
      Accuracy of deep learning to differentiate the histopathological grading of meningiomas on MR images: A preliminary study. | Banzato et al. | J Magn Reson Imaging | 2019 | Retrospective | To determine the diagnostic accuracy of a deep CNN in the differentiation of the histopathological grading of meningiomas from MR images. | meningioma | 117 patients | Pretrained CNN (Inception-V3 and AlexNet) | cT1-w, ADC
      Computer-aided Detection of Brain Metastases in T1-weighted MRI for Stereotactic Radiosurgery Using Deep Learning Single-Shot Detectors. | Zhou et al. | Radiology | 2020 | Retrospective | To develop and investigate DL methods for detecting brain metastasis with MRI to aid in treatment planning for stereotactic radiosurgery. | brain metastases | 266 patients | Deep-learning single-shot detector models | cT1-w
      A Novel Deep Learning Algorithm for the Automatic Detection of High-Grade Gliomas on T2-Weighted Magnetic Resonance Images: A Preliminary Machine Learning Study. | Atici et al. | Turk Neurosurg | 2020 | Retrospective | To propose a convolutional neural network (CNN) for the automatic detection of high-grade gliomas on T2-w MRI. | high-grade glioma | 179 patients | CNN architectures with convolutional layers followed by fully connected layers | T2-w
      Deep learning derived tumor infiltration maps for personalized target definition in Glioblastoma radiotherapy. | Peeken et al. | Radiother Oncol. | 2019 | Retrospective | To apply DL-based free water correction of DTI scans to estimate the infiltrative gross tumor volume inside of the FLAIR hyperintense region. | GBM | 33 patients | Neural network for signal deconvolution as described previously | DTI, T1-2, cT1-w, T2-w, FLAIR
      Deep-learned 3D black-blood imaging using automatic labelling technique and 3D convolutional neural networks for detecting metastatic brain tumors. | Jun et al. | Sci Rep. | 2018 | Retrospective | To propose a DL 3D BB imaging with an auto-labelling technique and 3D CNN for brain metastases detection without additional BB scan. | suspected brain metastasis | 65 patients | CNN comprised of only convolutional layers | CE 3D-GRE imaging & BB imaging
      All of the studies were retrospective. Half of the studies used data collected from a single institution and the other half collected data from at least two institutions (minimum 2, maximum 37 centres). Four studies focused on meningioma [
      • Banzato T.
      • Causin F.
      • Della Puppa A.
      • Cester G.
      • Mazzai L.
      • Zotti A.
      Accuracy of deep learning to differentiate the histopathological grading of meningiomas on MR images: A preliminary study.
      ,
      • Zhu Y.
      • Man C.
      • Gong L.
      • Dong D.i.
      • Yu X.
      • Wang S.
      • et al.
      A deep learning radiomics model for preoperative grading in meningioma.
      ,

      S. Maki et al., A Deep Convolutional Neural Network With Performance Comparable to Radiologists for Differentiating Between Spinal Schwannoma and Meningioma, Spine (Phila Pa 1976), vol. 45, no. 10, pp. 694–700, May 2020, doi: 10.1097/BRS.0000000000003353.

      ,
      • Laukamp K.R.
      • Thiele F.
      • Shakirin G.
      • Zopfs D.
      • Faymonville A.
      • Timmer M.
      • et al.
      Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI.
      ], two studies on glioblastoma [
      • Peeken J.C.
      • Molina-Romero M.
      • Diehl C.
      • Menze B.H.
      • Straube C.
      • Meyer B.
      • et al.
      Deep learning derived tumor infiltration maps for personalized target definition in Glioblastoma radiotherapy.
      ,
      • Sert E.
      • Özyurt F.
      • Doğantekin A.
      A new approach for brain tumor diagnosis system: Single image super resolution based maximum fuzzy entropy segmentation and convolutional neural network.
      ]. Two studies used glioma patients [

      Z. Li, Y. Wang, J. Yu, Y. Guo, and W. Cao, Deep Learning based Radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma, Sci Rep, vol. 7, no. 1, p. 5467, 14 2017, doi: 10.1038/s41598-017-05848-2.

      ,
      • Atici M.A.
      • Sagiroglu S.
      • Celtikci P.
      • Ucar M.
      • Borcek A.O.
      • Emmez H.
      • et al.
      A novel deep learning algorithm for the automatic detection of high-grade gliomas on T2-weighted magnetic resonance images: A preliminary machine learning study.
      ]. Two studies used different solid brain tumors [
      • Deepak S.
      • Ameer P.M.
      Brain tumor classification using deep CNN features via transfer learning.
      ,
      • Swati Z.N.K.
      • Zhao Q.
      • Kabir M.
      • Ali F.
      • Ali Z.
      • Ahmed S.
      • et al.
      Brain tumor classification for MR images using transfer learning and fine-tuning.
      ]. One study used brain metastases arising from other primary tumors [
      • Jun Y.
      • Eo T.
      • Kim T.
      • Shin H.
      • Hwang D.
      • Bae S.H.
      • et al.
      Deep-learned 3D black-blood imaging using automatic labelling technique and 3D convolutional neural networks for detecting metastatic brain tumors.
      ].

      Brain tumor classification

      Accurate diagnosis of brain lesions is essential for selecting an effective treatment. The classification of brain tumors into subtypes is a challenging research problem. Deepak et al. [
      • Deepak S.
      • Ameer P.M.
      Brain tumor classification using deep CNN features via transfer learning.
      ] proposed an automatic classification system based on the GoogLeNet CNN architecture and used it to identify three pathological subtypes of brain tumors (glioma, meningioma and pituitary tumor) on cT1-w imaging. A mean classification accuracy of 98% was achieved. Swati et al. [
      • Swati Z.N.K.
      • Zhao Q.
      • Kabir M.
      • Ali F.
      • Ali Z.
      • Ahmed S.
      • et al.
      Brain tumor classification for MR images using transfer learning and fine-tuning.
      ] approached the same problem using a pre-trained deep CNN model and a block-wise fine-tuning strategy based on transfer learning, and achieved an average accuracy of 95%. To differentiate meningiomas and spinal schwannomas, the most frequent tumors of the spinal cord, Maki et al. [

      S. Maki et al., A Deep Convolutional Neural Network With Performance Comparable to Radiologists for Differentiating Between Spinal Schwannoma and Meningioma, Spine (Phila Pa 1976), vol. 45, no. 10, pp. 694–700, May 2020, doi: 10.1097/BRS.0000000000003353.

      ] used a convolutional neural network (CNN) and were able to discriminate between the two with an AUC of 0.88 for T2-weighted imaging and 0.87 for contrast-enhanced T1-w imaging.

      Meningioma

      The detection and grading of meningiomas are important for selecting a suitable treatment for the individual patient. Laukamp et al. [
      • Laukamp K.R.
      • Thiele F.
      • Shakirin G.
      • Zopfs D.
      • Faymonville A.
      • Timmer M.
      • et al.
      Fully automated detection and segmentation of meningiomas using deep learning on routine multiparametric MRI.
      ] used a multiparametric deep-learning model on multiple MRI sequences, including T1 (with or without contrast), T2 and FLAIR, and was able to automatically detect meningiomas in 55 out of 56 cases. In addition, a deep CNN was able to discriminate between benign and atypical/anaplastic meningiomas on apparent diffusion coefficient (ADC) maps, with an AUC of 0.94 [
      • Banzato T.
      • Causin F.
      • Della Puppa A.
      • Cester G.
      • Mazzai L.
      • Zotti A.
      Accuracy of deep learning to differentiate the histopathological grading of meningiomas on MR images: A preliminary study.
      ].
      Radiomics is an image analysis method that uses a series of qualitative and quantitative analyses of high-throughput image features to obtain predictive or prognostic information from medical images. Zhu et al. [
      • Zhu Y.
      • Man C.
      • Gong L.
      • Dong D.i.
      • Yu X.
      • Wang S.
      • et al.
      A deep learning radiomics model for preoperative grading in meningioma.
      ] used a deep learning radiomics model to pre-operatively assess the grade of meningiomas. In their study, they compared deep features with more traditional hand-crafted features. The deep learning model significantly improved the discrimination between high- and low-grade meningioma, with an AUC of 0.811 compared to 0.678 for the hand-crafted features.
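The AUC values reported throughout this section can be interpreted as a rank statistic: the probability that a randomly chosen positive case (e.g. high-grade) receives a higher model score than a randomly chosen negative case, with ties counted as half. A minimal numpy sketch:

```python
import numpy as np

def auc(scores, labels):
    """AUC as the normalized Mann-Whitney U statistic: the probability
    that a random positive case outscores a random negative case
    (ties count half)."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))
```

Under this reading, Zhu et al.'s AUC of 0.811 means the deep model ranks a random high-grade case above a random low-grade case about 81% of the time, versus about 68% for the hand-crafted features.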

      Glioma & glioblastoma

      For all neurological lesions, the ability to classify the lesion according to the WHO classification is essential; currently, invasive methods such as biopsy or resection are used for this purpose [
      • Kristensen B.W.
      • Priesterbach-Ackley L.P.
      • Petersen J.K.
      • Wesseling P.
      Molecular pathology of tumors of the central nervous system.
      ,
      • Louis D.N.
      • Perry A.
      • Burger P.
      • Ellison D.W.
      • Reifenberger G.
      • von Deimling A.
      • et al.
      International Society Of Neuropathology-Haarlem consensus guidelines for nervous system tumor classification and grading.
      ]. Besides the risk of surgery-related complications, there is a risk of sampling error. On a public dataset of The Cancer Genome Atlas Glioblastoma Multiforme (TCGA-GBM), Sert et al. [
      • Sert E.
      • Özyurt F.
      • Doğantekin A.
      A new approach for brain tumor diagnosis system: Single image super resolution based maximum fuzzy entropy segmentation and convolutional neural network.
      ] developed an approach combining super resolution (SR; converting low-resolution input images into high-resolution images) and maximum fuzzy entropy segmentation (MFES) with a convolutional neural network to increase the classification performance between benign and malignant lesions. Out of 200 samples (100 benign, 100 malignant), a total of 10 false-positive/false-negative cases were observed (AUC = 0.98). The automatic detection of lesions is an additional benefit of DL in the diagnostic phase of cancer care. Atici et al. [
      • Atici M.A.
      • Sagiroglu S.
      • Celtikci P.
      • Ucar M.
      • Borcek A.O.
      • Emmez H.
      • et al.
      A novel deep learning algorithm for the automatic detection of high-grade gliomas on T2-weighted magnetic resonance images: A preliminary machine learning study.
      ] developed a CNN using 3580 images from 179 patients for the automatic detection of high-grade gliomas on T2-w MRI images and reported an acceptable performance, with an accuracy between 0.85 and 0.94 and a precision between 0.81 and 0.98.
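Accuracy and precision, as reported for such detection tasks, follow directly from the per-image confusion matrix. A small helper (sensitivity is included for completeness; this is a generic sketch, not the cited study's evaluation code):

```python
def detection_metrics(tp, fp, fn, tn):
    """Accuracy, precision (positive predictive value) and sensitivity
    (recall) from confusion-matrix counts of a detection task."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)
    sensitivity = tp / (tp + fn)
    return accuracy, precision, sensitivity
```

Precision is the fraction of flagged images that truly contain a high-grade glioma, so a precision range of 0.81 to 0.98 implies between roughly 1 in 5 and 1 in 50 flagged images is a false alarm.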
      In addition, deep learning-based radiomics can be used in the diagnosis of low-grade gliomas to detect patients with an isocitrate dehydrogenase 1 (IDH1) mutation. Note that the IDH1 mutation status accounts for a large proportion of the predictive value in low-grade glioma, and that the treatment regimen is currently defined according to IDH1 status. Li et al. [

      Z. Li, Y. Wang, J. Yu, Y. Guo, and W. Cao, Deep Learning based Radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma, Sci Rep, vol. 7, no. 1, p. 5467, 14 2017, doi: 10.1038/s41598-017-05848-2.

      ] demonstrated that deep learning radiomics on FLAIR and cT1-w MRI has the potential to predict IDH1 mutation in low-grade gliomas (AUC 0.92).
      Also, in preparation for treatment, accurate target definition is of utmost importance to improve the outcome of either radiotherapy or neurosurgery. Peeken et al. [
      • Peeken J.C.
      • Molina-Romero M.
      • Diehl C.
      • Menze B.H.
      • Straube C.
      • Meyer B.
      • et al.
      Deep learning derived tumor infiltration maps for personalized target definition in Glioblastoma radiotherapy.
] used a fully connected neural network in combination with DTI images to define the infiltrative tumor areas of GBM, with the purpose of guiding radiation treatment. In all patients, this novel infiltrative tumor definition was related to the location of tumor recurrence, and it has the potential to further individualize radiotherapy treatment.
To summarize, deep learning has the potential to contribute to the automatic classification of lesions with regard to disease type, grade or mutational status. All these developments can contribute to improved cancer care, both through a more efficient diagnostic workflow and as a step towards more personalized treatment.

      Follow-up

      Seven publications were related to the use of DL and MRI for the follow-up of neuro-oncological patients. All publications used retrospective data of glioma or glioblastoma patients (Table 3).
Table 3. Overview of the articles in the category 'follow-up'.
| Title | First author | Journal | Year | Study type | Goal of study | Patient population | Sample size | AI technology | MRI |
|---|---|---|---|---|---|---|---|---|---|
| A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme | Lao J et al. | Sci Rep | 2017 | Retrospective | To investigate if deep features extracted via transfer learning can generate radiomics signatures for prediction of overall survival in patients with GBM | GBM | 112 patients | Pre-trained CNN via transfer learning | T1-w, cT1-w, T2, FLAIR |
| Automatic assessment of glioma burden: a deep learning algorithm for fully automated volumetric and bidimensional measurement | Chang K et al. | Neuro Oncol | 2019 | Retrospective | The development of an algorithm that automatically segments FLAIR hyperintensity and contrast-enhancing tumor, quantitating tumor volumes as well as the product of maximum bidimensional diameters according to the RANO criteria (AutoRANO) | Low-grade glioma, high-grade glioma, GBM | 897 patients | 3D U-Net architecture | FLAIR, T1-w, cT1-w |
| Deep Transfer Learning and Radiomics Feature Prediction of Survival of Patients with High-Grade Gliomas | Han et al. | AJNR Am J Neuroradiol | 2020 | Retrospective | The production of a combined DL and radiomics model to predict overall survival in patients with high-grade gliomas | High-grade glioma | 178 patients | Pre-trained convolutional neural network | cT1-w |
| Deep learning in the detection of high-grade glioma recurrence using multiple MRI sequences: A pilot study | Bacchi et al. | J Clin Neurosci | 2019 | Retrospective | To determine the accuracy with which a CNN could predict recurrence/progression vs treatment-related changes using multiple MRI sequences | High-grade glioma | 55 patients | CNN | DWI, ADC, FLAIR and cT1-w |
| Multi-Channel 3D Deep Feature Learning for Survival Time Prediction of Brain Tumor Patients Using Multi-Modal Neuroimages | Nie D et al. | Sci Rep | 2019 | Retrospective | To predict the overall survival (OS) time of high-grade glioma patients | High-grade glioma | 93 patients | 3D CNN (Caffe) | T1-w, DTI, rs-fMRI |
| 3D Deep Learning for Multi-modal Imaging-Guided Survival Time Prediction of Brain Tumor Patients | Nie D et al. | Med Image Comput Comput Assist Interv | 2016 | Retrospective | To automatically extract features from multi-modal preoperative brain images (i.e., T1 MRI, fMRI and DTI) of high-grade glioma patients | High-grade glioma | 69 patients | 3D convolutional neural networks (CNNs) | cT1-w, resting-state fMRI, DTI |
| Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: a multicentre, retrospective study | Kickingereder et al. | Lancet Oncol | 2019 | Retrospective | To develop a framework relying on artificial neural networks (ANNs) for fully automated quantitative analysis of MRI in neuro-oncology | Glioma/GBM | 1027 patients | Artificial neural networks (ANN) | T1-w, cT1-w, FLAIR, T2-w |

      Response assessment in Neuro-Oncology (RANO)

To assess the treatment response of neuro-oncology patients, the Response Assessment in Neuro-Oncology (RANO) criteria are frequently used. These criteria are generally accepted for response assessment in clinical trials and are increasingly used in clinical practice. The RANO criteria divide response into four categories (complete response, partial response, stable disease or progression) based on MRI and clinical features. Chang et al. [

      K. Chang et al., Automatic assessment of glioma burden: a deep learning algorithm for fully automated volumetric and bidimensional measurement, Neuro Oncol, vol. 21, no. 11, pp. 1412–1422, 04 2019, doi: 10.1093/neuonc/noz106.

] developed an automatic pipeline for brain extraction, tumor segmentation and RANO measurements and applied it to two patient cohorts: low- or high-grade gliomas (843 patients with 843 MRI scans) and newly diagnosed glioblastomas (54 patients with 713 MRI scans). To develop the deep learning algorithm, they utilized the 3D U-Net architecture and automatically segmented the FLAIR hyperintensity and contrast-enhancing tumor, quantifying tumor volumes and the product of maximum bidimensional diameters according to the RANO criteria. The automatic RANO measurement was reproducible when evaluating the change in tumor burden during treatment, with an intraclass correlation coefficient (ICC) between automatic and manual delta RANO measurements of 0.85 (P < 0.001).
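The bidimensional measurement underlying such pipelines can be sketched as follows. This is a deliberately simplified, illustrative version (a single axial slice, unit in-plane voxel spacing, brute-force pairwise search), not the AutoRANO implementation.

```python
import numpy as np

def bidimensional_product(mask, angle_tol_deg=5.0):
    """RANO-style bidimensional product for one axial slice: the longest
    in-plane diameter times the longest diameter roughly perpendicular to
    it. mask: 2D boolean array (tumor voxels True); spacing assumed 1."""
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    if len(pts) < 2:
        return 0.0
    diff = pts[:, None, :] - pts[None, :, :]        # all pairwise vectors
    dist = np.linalg.norm(diff, axis=-1)
    i, j = np.unravel_index(np.argmax(dist), dist.shape)
    d1 = dist[i, j]                                  # longest diameter
    axis = diff[i, j] / d1                           # its unit direction
    # |cos| of the angle each voxel pair makes with the longest diameter
    cosang = np.abs(diff @ axis) / np.where(dist > 0, dist, np.inf)
    # keep only pairs within angle_tol_deg of perpendicular
    perp = dist * (cosang < np.sin(np.deg2rad(angle_tol_deg)))
    d2 = perp.max()                                  # longest perpendicular
    return d1 * d2
```

The brute-force pairwise search is O(N²) in the number of tumor voxels, which is acceptable for a sketch but would be replaced by a contour-based search in a production pipeline.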
A disadvantage of the current RANO criteria is that they rely on 2D measurements. The use of 3D measurements for assessing tumor response could provide more reliable results, but the significant workload of manual 3D assessment limits its practicality. For this reason, Kickingereder et al. [
      • Kickingereder P.
      • Isensee F.
      • Tursunova I.
      • Petersen J.
      • Neuberger U.
      • Bonekamp D.
      • et al.
      Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: a multicentre, retrospective study.
] developed an infrastructure to enable fully automated analysis of MRI and investigated its performance for tumor response assessment. The assessment of tumor response based on their neural network outperformed the RANO assessment as a predictor of overall survival in an EORTC dataset; in addition, the automated assessment of tumor response showed higher agreement with radiologist assessment than the RANO criteria. Based on both studies, artificial intelligence has shown additional value both in automating the RANO assessment and in improving it.

      Deep features

Deep radiomics features also have the potential to provide additional value in the assessment of overall survival. Transfer learning can be used within current radiomics models to extract a large number of deep features from the hidden layers of a CNN. Deep features contain more abstract information about the MRI images and potentially provide more predictive patterns than handcrafted features. In patients with GBM, deep features extracted via transfer learning on multi-modality MR images (T1, T1C, T2 and T2 FLAIR) were used to generate a radiomics signature based on six features for the prediction of overall survival [

      J. Lao et al., A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme, Sci Rep, vol. 7, no. 1, p. 10353, 04 2017, doi: 10.1038/s41598-017-10649-8.

]. The proposed radiomics signature showed higher performance (C-index 0.710) than general risk factors (age and KPS), and the combination of deep-learning-based radiomics and general risk factors improved the predictive performance to a C-index of 0.739 [

      J. Lao et al., A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme, Sci Rep, vol. 7, no. 1, p. 10353, 04 2017, doi: 10.1038/s41598-017-10649-8.

]. In addition, the combination of standard radiomics features with deep features was successful in an initial validation on cT1-w MRI images [
      • Han W.
      • Qin L.
      • Bay C.
      • Chen X.
      • Yu K.-H.
      • Miskin N.
      • et al.
      Deep transfer learning and radiomics feature prediction of survival of patients with high-grade gliomas.
      ].
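The C-index values reported above can be read as a ranking measure: the fraction of patient pairs in which the model assigns the higher risk to the patient who survives for a shorter time. A minimal sketch, ignoring the censoring that real survival data require:

```python
def c_index(risk, time):
    """Harrell-style concordance index in the simplest, uncensored setting.
    risk: predicted risk scores; time: observed survival times.
    0.5 is chance level, 1.0 is a perfect ranking."""
    concordant = comparable = 0
    for i in range(len(risk)):
        for j in range(i + 1, len(risk)):
            if time[i] == time[j]:
                continue                    # tied survival times are skipped
            comparable += 1
            shorter = i if time[i] < time[j] else j
            longer = j if shorter == i else i
            if risk[shorter] > risk[longer]:
                concordant += 1             # shorter survival, higher risk
            elif risk[shorter] == risk[longer]:
                concordant += 0.5           # tied risks count half
    return concordant / comparable
```

On a C-index scale, the reported improvement from 0.710 to 0.739 therefore means that roughly 3% more of all comparable patient pairs are ranked correctly.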

      Advanced MRI

Most studies use standard cT1-w, T1-w, T2-w or FLAIR MRI acquisitions; however, the use of additional MRI sequences, such as DWI, DTI or fMRI, in follow-up was described in three articles [
      • Bacchi S.
      • Zerner T.
      • Dongas J.
      • Asahina A.T.
      • Abou-Hamden A.
      • Otto S.
      • et al.
      Deep learning in the detection of high-grade glioma recurrence using multiple MRI sequences: A pilot study.
      ,
      • Nie D.
      • Zhang H.
      • Adeli E.
      • Liu L.
      • Shen D.
      3D deep learning for multi-modal imaging-guided survival time prediction of brain tumor patients.
      ,

      D. Nie et al., Multi-Channel 3D Deep Feature Learning for Survival Time Prediction of Brain Tumor Patients Using Multi-Modal Neuroimages, Sci Rep, vol. 9, no. 1, p. 1103, 31 2019, doi: 10.1038/s41598-018-37387-9.

      ].
      Bacchi et al. [
      • Bacchi S.
      • Zerner T.
      • Dongas J.
      • Asahina A.T.
      • Abou-Hamden A.
      • Otto S.
      • et al.
      Deep learning in the detection of high-grade glioma recurrence using multiple MRI sequences: A pilot study.
] aimed to distinguish high-grade glioma progression from treatment-related changes such as pseudoprogression or radionecrosis. For this purpose, the authors performed classification experiments using a CNN on DWI, ADC, FLAIR and cT1-w images. The DWI sequence had the best single-sequence performance (AUC 0.63, accuracy 0.73), and DWI was subsequently combined with the other sequences. The combination of DWI and FLAIR showed the highest performance (AUC 0.80, accuracy 0.82), which suggests that DL may be useful in distinguishing progression from treatment-induced changes in normal brain tissue.
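Multi-sequence experiments of this kind are typically realized by stacking co-registered volumes as CNN input channels. The sketch below is illustrative only and does not reproduce the authors' preprocessing; the per-sequence z-score normalization is an assumption of ours.

```python
import numpy as np

def stack_sequences(dwi, flair):
    """Combine two co-registered MRI volumes of shape (D, H, W) into one
    multi-channel CNN input of shape (2, D, H, W). Each sequence is
    z-score normalized independently so intensity scales are comparable."""
    norm = lambda v: (v - v.mean()) / (v.std() + 1e-8)
    return np.stack([norm(dwi), norm(flair)], axis=0)
```

The same pattern extends to any number of sequences (ADC, cT1-w, etc.) by adding channels, provided all volumes are resampled to a common grid first.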
To predict overall survival in patients with high-grade glioma, Nie et al. proposed a CNN architecture on multi-modal pre-operative MRI images (T1-w, fMRI and DTI) to train a survival time prediction model. This method was used to extract features from the image modalities in a supervised manner and to train a Support Vector Machine to predict overall survival time. The experimental results showed that both fMRI and DTI played a more significant role than conventional T1-w MRI in building a successful prediction model [
      • Nie D.
      • Zhang H.
      • Adeli E.
      • Liu L.
      • Shen D.
      3D deep learning for multi-modal imaging-guided survival time prediction of brain tumor patients.
]. In a subsequent study of 68 high-grade glioma patients, the authors showed that combining the features extracted by their multi-modality, multi-channel deep survival prediction framework with demographic and tumor-related features yielded an accuracy of 91% in predicting overall survival [

      D. Nie et al., Multi-Channel 3D Deep Feature Learning for Survival Time Prediction of Brain Tumor Patients Using Multi-Modal Neuroimages, Sci Rep, vol. 9, no. 1, p. 1103, 31 2019, doi: 10.1038/s41598-018-37387-9.

      ].
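The two-stage idea of deep features feeding a separate classifier can be sketched as follows. Global average pooling is a common way to turn CNN feature maps into a fixed-length feature vector; the nearest-centroid rule below is only a toy stand-in for the Support Vector Machine used by the authors, and all names are ours.

```python
import numpy as np

def deep_features(activations):
    """Global-average-pool a stack of CNN feature maps of shape (C, H, W)
    into a C-dimensional 'deep feature' vector."""
    return activations.mean(axis=(1, 2))

def nearest_centroid_predict(train_feats, train_labels, test_feats):
    """Toy stand-in for the SVM stage: assign each test feature vector to
    the class with the nearest mean training feature vector."""
    labels = np.unique(train_labels)
    centroids = np.stack([train_feats[train_labels == c].mean(axis=0)
                          for c in labels])
    d = np.linalg.norm(test_feats[:, None, :] - centroids[None, :, :], axis=-1)
    return labels[np.argmin(d, axis=1)]
```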
In brief, deep learning applications have shown their potential to automate the response assessment criteria (RANO), to improve response assessment by including 3D information, to improve the prediction of overall survival through the inclusion of deep features or advanced MRI sequences, and to distinguish progression from treatment-related changes on MRI. None of these methods, however, has been extensively implemented or validated in current clinical practice, but they show promising avenues for further investigation.

      Discussion & Conclusion

      The use of DL methods in the analysis of MRI data in neuro-oncology is rising. This study provides an overview of the current use of DL in the field of neuro-oncological MRI. Forty-one publications were reviewed and covered a broad range of applications, from technological innovations to improving diagnosis and follow-up.
We observed that the majority of publications fell into the category of technological innovations. This is expected, since technological developments precede the clinical applications in diagnosis and follow-up. These innovations affect the clinical applicability of MRI technology or its efficiency, e.g. by reducing workload. In addition, the availability of open data to develop DL technology for brain tumor image segmentation directly results in a significant number of publications. The Brain Tumor Segmentation (BraTS) challenges are repeated on a yearly basis; the 2020 challenge added, on top of the tumor segmentation task, a focus on the prediction of patient overall survival, and initially planned a task to differentiate between pseudoprogression and true tumor recurrence. Based on these (proposed) challenges, we can expect the field to move rapidly in the development of additional DL applications to support patient follow-up as well.
Challenges in the use of deep learning include, among others, the generalizability of models to different institutes and MRI scanners, as well as access to a large amount of annotated data to develop, train and externally validate the models. By using data from several institutes, a DL model is exposed to a larger range of data variations, which will generate a more robust and more broadly applicable model. There are, however, several barriers to sharing clinical imaging data, including technical, ethical, political and administrative issues [
      • Sullivan R.
      • Peppercorn J.
      • Sikora K.
      • Zalcberg J.
      • Meropol N.J.
      • Amir E.
      • et al.
      Delivering affordable cancer care in high-income countries.
]. Therefore, the ability to train a deep learning model without sharing the data, by using distributed or federated learning, is a promising approach. Using federated learning, models can be developed on data from different institutes and therefore on a larger and more diverse dataset. Czeizler et al. [
      • Czeizler E.
      • Wiessler W.
      • Koester T.
      • Hakala M.
      • Basiri S.
      • Jordan P.
      • et al.
      Using federated data sources and Varian Learning Portal framework to train a neural network model for automatic organ segmentation.
] recently showed the ability to train a deep neural network model for organ segmentation in a distributed manner, with performance similar to a centralized approach. Both distributed learning and the application of deep learning to imaging data are active fields of research; combining the two could be a promising next step. Furthermore, the clinical introduction of these AI methods requires careful consideration of the training, commissioning and acceptance of such models, for which recently published guidelines can be followed [
      • Vandewinckele L.
      • Claessens M.
      • Dinkla A.
      • Brouwer C.
      • Crijns W.
      • Verellen D.
      • et al.
      Overview of artificial intelligence-based applications in radiotherapy: Recommendations for implementation and quality assurance.
      ].
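The core of federated learning is that model parameters, rather than patient data, leave each institute. A minimal sketch of the central aggregation step (illustrative only; real frameworks such as the one referenced above involve far more machinery, including secure communication and repeated training rounds):

```python
import numpy as np

def federated_average(site_weights, site_sizes):
    """Weighted average of per-site model parameters (federated averaging).
    site_weights: list of 1D parameter vectors, one per institute.
    site_sizes:   number of local training samples per institute.
    Only these weight vectors, never the underlying data, are shared."""
    sizes = np.asarray(site_sizes, dtype=float)
    w = np.stack(site_weights)                       # (n_sites, n_params)
    return (w * (sizes / sizes.sum())[:, None]).sum(axis=0)
```

In practice this averaging is repeated: the aggregated model is sent back to the institutes, locally updated, and averaged again until convergence.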
In our review, all studies related to ‘follow-up’ focused on the evaluation of tumor response or treatment outcome (e.g. overall survival). This can be related to the fact that most studies used data from patients with high-grade glioma or glioblastoma, who have relatively poor survival. Nevertheless, for patients with a brain lesion and a relatively long life expectancy, such as meningioma patients, the prediction of treatment-induced side effects could have additional value as well. We expect additional value from cross-disciplinary research combining knowledge from general neuro-imaging, neuropsychology and specific research fields in dementia, Parkinson’s disease or epilepsy. For example, after high-dose radiotherapy for primary or metastatic brain tumors, 50–90% of patients surviving longer than 6 months develop irreversible, disabling cognitive decline, leading to premature loss of independence, reduced Quality of Life (QOL) and a significant economic burden at both the individual and societal level [
      • Makale M.T.
      • McDonald C.R.
      • Hattangadi-Gluth J.A.
      • Kesari S.
      Mechanisms of radiotherapy-associated cognitive disability in patients with brain tumours.
      ]. Therefore, evaluation methods to assess and predict side effects after treatment would be beneficial to allow the development of optimized treatment strategies.
To allow the evaluation of treatment-induced side effects, and to better target oncological treatment, the healthy brain structures and their susceptibility to treatment-induced damage should be taken into account. The auto-segmentation of structures at risk can have additional benefits on top of automatic tumor segmentation. For the neuro-oncological domain, a consensus-based atlas for CT- and MRI-based contouring is available from the European Particle Therapy Network [

      D. B. Eekers et al., The EPTN consensus-based atlas for CT- and MR-based contouring in neuro-oncology, Radiother Oncol, vol. 128, no. 1, pp. 37–43, 2018, doi: 10.1016/j.radonc.2017.12.013.

]. Auto-segmentation of these brain structures could improve quality, by reducing inter-observer variation, and efficiency, by reducing delineation time.
The majority of publications made use of the standard MRI sequences available, such as T1-w, cT1-w, T2 or FLAIR imaging. The most straightforward reason for this is the availability and quantity of these imaging data. Nevertheless, a few studies used advanced MRI techniques (e.g. fMRI, DTI, DWI) and presented favourable results in comparison to standard imaging techniques [
      • Bacchi S.
      • Zerner T.
      • Dongas J.
      • Asahina A.T.
      • Abou-Hamden A.
      • Otto S.
      • et al.
      Deep learning in the detection of high-grade glioma recurrence using multiple MRI sequences: A pilot study.
      ,

      D. Nie et al., Multi-Channel 3D Deep Feature Learning for Survival Time Prediction of Brain Tumor Patients Using Multi-Modal Neuroimages, Sci Rep, vol. 9, no. 1, p. 1103, 31 2019, doi: 10.1038/s41598-018-37387-9.

]. Furthermore, perfusion MRI is another area that requires elaborate post-processing, and initial investigations of the use of deep learning for perfusion MRI in neuro-oncology are ongoing [
      • Winder A.
      • d’Esterre C.D.
      • Menon B.K.
      • Fiehler J.
      • Forkert N.D.
      Automatic arterial input function selection in CT and MR perfusion datasets using deep convolutional neural networks.
      ,
      • Nalepa J.
      • Ribalta Lorenzo P.
      • Marcinkiewicz M.
      • Bobek-Billewicz B.
      • Wawrzyniak P.
      • Walczak M.
      • et al.
      Fully-automated deep learning-powered system for DCE-MRI analysis of brain tumors.
      ,

      J. E. Park et al., Diffusion and perfusion MRI radiomics obtained from deep learning segmentation provides reproducible and comparable diagnostic model to human in post-treatment glioblastoma, Eur Radiol, Oct. 2020, doi: 10.1007/s00330-020-07414-3.

      ].
The publications in this review all originate from the past four years, which shows that deep learning technology is evolving rapidly for applications within neuro-oncological MRI. Nevertheless, the clinical use of these models is still limited. Shortliffe et al. [
      • Shortliffe E.H.
      • Sepúlveda M.J.
      Clinical decision support in the era of artificial intelligence.
] present six challenges for the implementation of AI in clinical decision support systems: black boxes are unacceptable; time is a scarce resource; systems must be intuitive and simple; relevance and insight are essential; AI should inform and assist, not replace, the clinician; and the scientific foundation must be strong. As the first of these (black boxes are unacceptable) could be the biggest challenge in the clinical application of DL, emphasis should be placed on developing explainable AI. In addition, ethical dilemmas, such as balancing the advantages and risks of AI technology, the role of AI in medical education (e.g. how do we prepare future clinicians for the use of AI?) and potential legal conflicts (who is responsible when a black-box AI is used?), can slow down the implementation of these novel technologies [
      • Rigby M.J.
      Ethical dimensions of using artificial intelligence in health care.
      ].
To conclude, deep learning in MRI for neuro-oncology is a novel field of research that has shown potential in a broad range of applications. Nevertheless, challenges remain: the accessibility of large, representative imaging datasets, the applicability of models across institutes and MRI vendors, and the barriers to implementing these AI technologies in clinical practice.

      Acknowledgements

This research was supported by a grant from ZonMw, project number 10070012010002 (AMICUS), and by Kankerbestrijding and NWO Domain AES as part of their joint strategic research programme Technology for Oncology II. The collaboration project is co-funded by the PPP Allowance made available by Health~Holland, Top Sector Life Sciences & Health, to stimulate public-private partnerships.

      References

      1. J. D. Rudie, A. M. Rauschecker, R. N. Bryan, C. Davatzikos, and S. Mohan, Emerging Applications of Artificial Intelligence in Neuro-Oncology, Radiology, vol. 290, no. 3, Art. no. 3, Mar. 2019, doi: 10.1148/radiol.2018181928.

        • Sahiner B.
        • Pezeshk A.
        • Hadjiiski L.M.
        • Wang X.
        • Drukker K.
        • Cha K.H.
        • et al.
        Deep learning in medical imaging and radiation therapy.
        Med Phys. 2019; 46: e1-e36. https://doi.org/10.1002/mp.13264
      2. Y. LeCun, Y. Bengio, and G. Hinton, Deep learning, Nature, vol. 521, no. 7553, Art. no. 7553, May 2015, doi: 10.1038/nature14539.

      3. S. M. McKinney et al., International evaluation of an AI system for breast cancer screening, Nature, vol. 577, no. 7788, Art. no. 7788, Jan. 2020, doi: 10.1038/s41586-019-1799-6.

        • Kickingereder P.
        • Isensee F.
        • Tursunova I.
        • Petersen J.
        • Neuberger U.
        • Bonekamp D.
        • et al.
        Automated quantitative tumour response assessment of MRI in neuro-oncology with artificial neural networks: a multicentre, retrospective study.
        Lancet Oncol. 2019; 20: 728-740https://doi.org/10.1016/S1470-2045(19)30098-1
        • Le Bihan D.
        • Mangin J.-F.
        • Poupon C.
        • Clark C.A.
        • Pappata S.
        • Molko N.
        • et al.
        Diffusion tensor imaging: concepts and applications.
        J Magn Reson Imaging. 2001; 13: 534-546https://doi.org/10.1002/jmri.1076
        • Gong E.
        • Pauly J.M.
        • Wintermark M.
        • Zaharchuk G.
        Deep learning enables reduced gadolinium dose for contrast-enhanced brain MRI.
        J Magn Reson Imaging. 2018; 48: 330-340https://doi.org/10.1002/jmri.25970
        • Gurbani S.S.
        • et al.
        A convolutional neural network to filter artifacts in spectroscopic MRI.
        Magn Reson Med. 2018; 80: 1765-1775https://doi.org/10.1002/mrm.27166
        • Gurbani S.S.
        • Sheriff S.
        • Maudsley A.A.
        • Shim H.
        • Cooper L.A.D.
        Incorporation of a spectral model in a convolutional neural network for accelerated spectral fitting.
        Magn Reson Med. 2019; 81: 3346-3357https://doi.org/10.1002/mrm.27641
      4. F. Isensee et al., Automated brain extraction of multisequence MRI using artificial neural networks, Hum Brain Mapp, vol. 40, no. 17, pp. 4952–4964, 01 2019, doi: 10.1002/hbm.24750.

        • Kazemifar S.
        • McGuire S.
        • Timmerman R.
        • Wardak Z.
        • Nguyen D.
        • Park Y.
        • et al.
        MRI-only brain radiotherapy: Assessing the dosimetric accuracy of synthetic CT images generated using a deep learning approach.
        Radiother Oncol. 2019; 136: 56-63https://doi.org/10.1016/j.radonc.2019.03.026
        • Neppl S.
        • Landry G.
        • Kurz C.
        • Hansen D.C.
        • Hoyle B.
        • Stöcklein S.
        • et al.
        Evaluation of proton and photon dose distributions recalculated on 2D and 3D Unet-generated pseudoCTs from T1-weighted MR head scans.
        Acta Oncol. 2019; 58: 1429-1434https://doi.org/10.1080/0284186X.2019.1630754
        • Liu F.
        • Yadav P.
        • Baschnagel A.M.
        • McMillan A.B.
        MR-based treatment planning in radiation therapy using a deep learning approach.
        J Appl Clin Med Phys. Mar. 2019; 20: 105-114https://doi.org/10.1002/acm2.12554
        • Dinkla A.M.
        • Wolterink J.M.
        • Maspero M.
        • Savenije M.H.F.
        • Verhoeff J.J.C.
        • Seravalli E.
        • et al.
        MR-only brain radiation therapy: dosimetric evaluation of synthetic CTs generated by a dilated convolutional neural network.
        Int J Radiat Oncol Biol Phys. 2018; 102: 801-812https://doi.org/10.1016/j.ijrobp.2018.05.058
        • Hoseini F.
        • Shahbahrami A.
        • Bayat P.
        An efficient implementation of deep convolutional neural networks for MRI segmentation.
        J Digit Imaging. 2018; 31: 738-747https://doi.org/10.1007/s10278-018-0062-2
        • Hoseini F.
        • Shahbahrami A.
        • Bayat P.
        AdaptAhead optimization algorithm for learning deep CNN applied to MRI segmentation.
        J Digit Imaging. 2019; 32: 105-115https://doi.org/10.1007/s10278-018-0107-6
        • Iqbal S.
        • Ghani Khan M.U.
        • Saba T.
        • Mehmood Z.
        • Javaid N.
        • Rehman A.
        • et al.
        Deep learning model integrating features and novel classifiers fusion for brain tumor segmentation.
        Microsc Res Tech. 2019; 82: 1302-1315https://doi.org/10.1002/jemt.23281
        • Wang G.
        • Li W.
        • Zuluaga M.A.
        • Pratt R.
        • Patel P.A.
        • Aertsen M.
        • et al.
        Interactive medical image segmentation using deep learning with image-specific fine tuning.
        IEEE Trans Med Imaging. 2018; 37: 1562-1573. https://doi.org/10.1109/TMI.2018.2791721
        • Li H.
        • Li A.
        • Wang M.
        A novel end-to-end brain tumor segmentation method using improved fully convolutional networks.
        Comput Biol Med. 2019; 108: 150-160https://doi.org/10.1016/j.compbiomed.2019.03.014
        • Sun J.
        • Chen W.
        • Peng S.
        • Liu B.
        DRRNet: Dense residual refine networks for automatic brain tumor segmentation.
        J Med Syst. Jun. 2019; 43: 221https://doi.org/10.1007/s10916-019-1358-6
        • Deng W.
        • Shi Q.
        • Luo K.
        • Yang Y.
        • Ning N.
        Brain tumor segmentation based on improved convolutional neural network in combination with non-quantifiable local texture feature.
        J Med Syst. 2019; 43: 152https://doi.org/10.1007/s10916-019-1289-2
        • Pereira S.
        • Pinto A.
        • Amorim J.
        • Ribeiro A.
        • Alves V.
        • Silva C.A.
        Adaptive feature recombination and recalibration for semantic segmentation with fully convolutional networks.
        IEEE Trans Med Imaging. 2019; 38: 2914-2925https://doi.org/10.1109/TMI.2019.2918096
        • Perkuhn M.
        • Stavrinou P.
        • Thiele F.
        • Shakirin G.
        • Mohan M.
        • Garmpis D.
        • et al.
        Clinical evaluation of a multiparametric deep learning model for glioblastoma segmentation using heterogeneous magnetic resonance imaging data from clinical routine.
        Invest Radiol. 2018; 53: 647-654https://doi.org/10.1097/RLI.0000000000000484
        • Tang F.
        • Liang S.
        • Zhong T.
        • Huang X.
        • Deng X.
        • Zhang Y.u.
        • et al.
        Postoperative glioma segmentation in CT image using deep feature fusion model guided by multi-sequence MRIs.
        Eur Radiol. 2020; 30: 823-832https://doi.org/10.1007/s00330-019-06441-z
        • Geetha A.
        • Gomathi N.
        A robust grey wolf-based deep learning for brain tumour detection in MR images.
        Biomed Tech (Berl). 2020; 65: 191-207https://doi.org/10.1515/bmt-2018-0244
        • Thillaikkarasi R.
        • Saravanan S.
        An enhancement of deep learning algorithm for brain tumor segmentation using kernel based CNN with M-SVM.
        J Med Syst. 2019; 43: 84https://doi.org/10.1007/s10916-019-1223-7
        • Stember J.N.
        • Celik H.
        • Krupinski E.
        • Chang P.D.
        • Mutasa S.
        • Wood B.J.
        • et al.
        Eye tracking for deep learning segmentation using convolutional neural networks.
        J Digit Imaging. 2019; 32: 597-604https://doi.org/10.1007/s10278-019-00220-4
        • Wong K.C.L.
        • Syeda-Mahmood T.
        • Moradi M.
        Building medical image classifiers with very limited data using segmentation networks.
        Med Image Anal. 2018; 49: 105-116https://doi.org/10.1016/j.media.2018.07.010
        • Gulani V.
        • Calamante F.
        • Shellock F.G.
        • Kanal E.
        • Reeder S.B.
        Gadolinium deposition in the brain: summary of evidence and recommendations.
        The Lancet Neurology. 2017; 16: 564-570https://doi.org/10.1016/S1474-4422(17)30158-8
        • Klein A.
        • Ghosh S.S.