Highlights
- •Proposed unified framework for glioma survival classification & visual interpretation.
- •Proposed an integrated modality-specific and modality-concatenated system.
- •Presented in-depth analysis of designing the classification network.
- •Generation of attention maps over different convolutional levels & non-predicted labels.
Abstract
Purpose
Methods and materials
Results
Conclusions
Graphical abstract

Keywords
1. Introduction
- •An end-to-end unified framework for overall survival classification and visual interpretation is introduced.
- •An integrated modality-specific and modality-concatenated system is proposed, which incorporates the benefits of both. A modality-specific pathway is adopted for each MRI modality to independently acquire the full characterization of important regions, while a modality-concatenated pathway captures inter-modality correlations.
- •Modifications in the existing multi-path and single-path classification models are also highlighted.
- •An in-depth analysis of designing and evaluation of the classification network is also presented.
- •The classification outcomes are interpreted and validated by generating attention maps over different convolutional levels and non-predicted labels.
- •This is the first study to provide visual interpretability of an overall survival classification model, while significantly outperforming existing classification approaches.
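To make the integrated design above concrete, the following is a minimal NumPy sketch of the fusion idea only, not the authors' actual CNN: each modality branch and the concatenated branch are reduced to a single linear projection, and the "image vector" is illustrated by the first modality's flattened input. All names (`classify`, `w_spec`, `w_cat`, `w_out`) and all shapes are hypothetical.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(modalities, w_spec, w_cat, w_out):
    """Toy forward pass of an integrated modality-specific +
    modality-concatenated classifier (illustrative only).

    modalities : list of four flattened MRI modality vectors
                 (e.g. FLAIR, T1, T1-CE, T2)
    w_spec     : one weight matrix per modality-specific branch
    w_cat      : weight matrix applied to the concatenated modalities
    w_out      : classifier weights over the fused feature vector
    """
    # Modality-specific pathway: one independent branch per modality.
    spec_feats = [m @ w for m, w in zip(modalities, w_spec)]
    # Modality-concatenated pathway: all modalities stacked together.
    cat_feat = np.concatenate(modalities) @ w_cat
    # Fuse both pathways plus a flattened image vector (here the first
    # modality, purely for illustration), then classify into 3 classes.
    fused = np.concatenate(spec_feats + [cat_feat, modalities[0]])
    return softmax(fused @ w_out)
```

In the real architecture each branch is a convolutional network rather than a single matrix multiply; the point of the sketch is only the late fusion of the two pathways before the dense classifier.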
2. Related work
Author | Approach | Type | Deep Learning Architecture | Dataset & Performance metrics | Pros/Cons |
---|---|---|---|---|---|
Bhadani et al. [17] | Feature-based | 3D | __ | Local dataset, 29 patients. Accuracy: 68.4% | Used only volume-based features; extremely small dataset. |
Lao et al. [18] | Feature-based + Feature-learned | 2D | Pretrained CNN | Local dataset 112 patients C-index = 0.710 | Less exploration of better deep learning architectures and evaluation criteria. |
Huang et al. [14] | Feature-based + Feature-learned | 3D | Single path CNN | BraTS 2019, BraTS 2020 RMSE 311.5 | Did not explore ways to overcome the loss of spatial content caused by 3D max-pooling. |
Pei et al. [19] | Feature-learned | 3D | Single path CNN | BraTS 2019 Accuracy: 58% | No modifications presented in the CNN for achieving better classification outcomes. |
Banerjee et al. [20] | Feature-based + Feature-learned | 2D | Multilayer Perceptron with 2 hidden layers | BraTS 2018 Accuracy: 58%, MSE: 180959.4 | Low model performance in comparison to other 2D methods. |
Nie et al. [21] | Feature-learned + SVM | 3D | Four path CNN | Local dataset, 29 patients. Accuracy: 90.66% | Achieved better classification results. |
Puybareau et al. [13] | Feature-based | 2D | __ | BraTS 2018 Accuracy: 61% | Focused only on extracting brain tumor location and its size |
Fu et al. [22] | Feature-learned | 2D | Dual path CNN | BraTS 2018 Accuracy: 94% | Better accuracies obtained but could not explore the shortcomings of the dual-path CNN models. |
Kao et al. [23] | Feature-based | 3D | __ | BraTS 2018 Accuracy: 70% | Utilized a different approach by extracting tractographic features. |
Mossa et al. [24] | Feature-learned | 2D | Ensemble of six CNNs | BraTS 2017 Accuracy: 92.9% | Computationally expensive approach. |
3. Proposed method

3.1 Pre-processing modules and glioma detection

3.2 Overall survival classification pipeline
3.2.1 Modality-specific pathway
3.2.2 Modality-concatenated pathway
3.3 Visual interpretability pipeline
Algorithm 1: Visual Interpretability
Inputs:
    C_output = activation maps of the last convolutional layer to be visualized
    PredT = tensor of prediction probabilities for all classes
Initialize:
    G_Cam = zero array of dimension C_output[0:2]
    T = index of the maximum probability in PredT
    O_neuron = PredT[:, T]  # output value of the top predicted class for the input
    # For heatmaps of the short-survivor class: O_neuron = PredT[:, 0]  (class 1)
    # For heatmaps of the mid-survivor class:   O_neuron = PredT[:, 1]  (class 2)
    # For heatmaps of the long-survivor class:  O_neuron = PredT[:, 2]  (class 3)
Steps:
    C_grad = Gradients(O_neuron, C_output)  # gradients of the selected class score with respect to C_output
    W_grad = Mean(C_grad, axis=(0, 1))  # average the gradients spatially
    For index in W_grad do
        G_Cam += W_grad[index] * C_output[index]  # element-wise multiplication and weighted sum
    End For
    G_Cam = resize(G_Cam, (160, 160))  # resize the heatmap to the image dimension
    G_Cam = ReLU(G_Cam)  # eliminate values below 0
    G_Cam = (G_Cam - G_Cam.min()) / (G_Cam.max() - G_Cam.min())  # min-max normalization
    Hcam = Visualize(G_Cam)  # visualize the heatmap
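The weighting, ReLU, resizing and normalization steps of this Grad-CAM-style procedure can be sketched in plain NumPy. The gradients are assumed to be precomputed (in practice they come from the framework's autodiff, e.g. a gradient tape); the nearest-neighbour resize stands in for a library call such as `cv2.resize`, and channel-last layout is assumed.

```python
import numpy as np

def grad_cam_heatmap(c_output, c_grad, out_size=(160, 160)):
    """Grad-CAM heatmap from precomputed activations and gradients.

    c_output : (H, W, K) activation maps of the last convolutional layer
    c_grad   : (H, W, K) gradients of the chosen class score w.r.t. c_output
    """
    # Average the gradients spatially: one importance weight per channel.
    w_grad = c_grad.mean(axis=(0, 1))                    # shape (K,)
    # Weighted sum of the activation maps.
    g_cam = np.zeros(c_output.shape[:2])
    for k in range(len(w_grad)):
        g_cam += w_grad[k] * c_output[:, :, k]
    # Nearest-neighbour resize to the input-image dimension.
    rows = np.arange(out_size[0]) * g_cam.shape[0] // out_size[0]
    cols = np.arange(out_size[1]) * g_cam.shape[1] // out_size[1]
    g_cam = g_cam[np.ix_(rows, cols)]
    # ReLU: keep only regions with a positive influence on the class.
    g_cam = np.maximum(g_cam, 0.0)
    # Min-max normalization to [0, 1].
    rng = g_cam.max() - g_cam.min()
    return (g_cam - g_cam.min()) / rng if rng > 0 else g_cam
```

Passing gradients of `PredT[:, 0]`, `PredT[:, 1]` or `PredT[:, 2]` instead of the top class reproduces the short-, mid- and long-survivor heatmaps described above.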
4. Experiments and results
4.1 Data and implementation details


4.2 Evaluation metrics
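The tables below report accuracy, sensitivity and specificity per class and overall. A minimal sketch of how such one-vs-rest metrics are typically derived from a confusion matrix (the function name and the class numbering are illustrative, with classes 1-3 mapped to indices 0-2):

```python
import numpy as np

def per_class_metrics(y_true, y_pred, n_classes=3):
    """One-vs-rest accuracy, sensitivity and specificity per class."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    metrics = {}
    for c in range(n_classes):
        tp = np.sum((y_true == c) & (y_pred == c))   # true positives
        tn = np.sum((y_true != c) & (y_pred != c))   # true negatives
        fp = np.sum((y_true != c) & (y_pred == c))   # false positives
        fn = np.sum((y_true == c) & (y_pred != c))   # false negatives
        metrics[c] = {
            "accuracy": (tp + tn) / (tp + tn + fp + fn),
            "sensitivity": tp / (tp + fn) if tp + fn else 0.0,  # recall / TPR
            "specificity": tn / (tn + fp) if tn + fp else 0.0,  # TNR
        }
    return metrics
```

The "Overall" columns in the tables are then averages of these per-class values across the three survivor classes.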
4.3 Result analysis
4.3.1 Ablation study for designing the network architecture for overall survival classification
Design | Modalities | Accuracy C1 | Accuracy C2 | Accuracy C3 | Accuracy Overall | Sensitivity C1 | Sensitivity C2 | Sensitivity C3 | Sensitivity Overall | Specificity C1 | Specificity C2 | Specificity C3 | Specificity Overall
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Single path | FL | 0.870 | 0.901 | 0.888 | 0.886 | 0.845 | 0.905 | 0.854 | 0.861 | 0.882 | 0.923 | 0.851 | 0.885
 | T1-CE | 0.836 | 0.830 | 0.829 | 0.832 | 0.841 | 0.835 | 0.832 | 0.836 | 0.846 | 0.831 | 0.833 | 0.837
 | T2 | 0.810 | 0.800 | 0.808 | 0.806 | 0.821 | 0.819 | 0.812 | 0.817 | 0.814 | 0.813 | 0.821 | 0.816
 | T1 | 0.721 | 0.710 | 0.725 | 0.719 | 0.730 | 0.725 | 0.711 | 0.722 | 0.723 | 0.714 | 0.722 | 0.720
Dual path | FL + T1-CE | 0.895 | 0.888 | 0.893 | 0.892 | 0.895 | 0.900 | 0.901 | 0.899 | 0.902 | 0.891 | 0.892 | 0.895
 | FL + T1 | 0.892 | 0.880 | 0.891 | 0.888 | 0.896 | 0.871 | 0.883 | 0.883 | 0.890 | 0.882 | 0.888 | 0.887
 | T1 + T2 | 0.872 | 0.868 | 0.870 | 0.870 | 0.886 | 0.861 | 0.864 | 0.870 | 0.873 | 0.869 | 0.870 | 0.871
 | FL + T2 | 0.879 | 0.860 | 0.869 | 0.869 | 0.874 | 0.860 | 0.861 | 0.865 | 0.866 | 0.862 | 0.870 | 0.866
 | T1-CE + T2 | 0.851 | 0.842 | 0.846 | 0.846 | 0.859 | 0.855 | 0.841 | 0.852 | 0.840 | 0.851 | 0.848 | 0.846
 | T1 + T1-CE | 0.833 | 0.829 | 0.828 | 0.830 | 0.830 | 0.828 | 0.831 | 0.830 | 0.833 | 0.820 | 0.819 | 0.824
Three path | FL + T1-CE + T2 | 0.950 | 0.942 | 0.943 | 0.945 | 0.945 | 0.938 | 0.951 | 0.945 | 0.933 | 0.924 | 0.941 | 0.933
 | T1 + T1-CE + T2 | 0.932 | 0.933 | 0.933 | 0.933 | 0.927 | 0.920 | 0.932 | 0.926 | 0.937 | 0.929 | 0.933 | 0.933
 | FL + T1 + T2 | 0.933 | 0.921 | 0.922 | 0.925 | 0.930 | 0.930 | 0.929 | 0.930 | 0.921 | 0.928 | 0.930 | 0.926
 | FL + T1-CE + T1 | 0.911 | 0.922 | 0.919 | 0.917 | 0.910 | 0.924 | 0.912 | 0.915 | 0.911 | 0.900 | 0.909 | 0.907
Four path | T1 + T2 + T1-CE + FL | 0.949 | 0.937 | 0.931 | 0.939 | 0.933 | 0.921 | 0.939 | 0.931 | 0.931 | 0.926 | 0.931 | 0.929
Modality concatenated | FL + T2 + T1-CE | 0.911 | 0.894 | 0.901 | 0.902 | 0.951 | 0.882 | 0.851 | 0.894 | 0.900 | 0.891 | 0.911 | 0.901
 | FL + T2 + T1-CE + T1 | 0.920 | 0.919 | 0.922 | 0.920 | 0.919 | 0.899 | 0.879 | 0.899 | 0.899 | 0.912 | 0.895 | 0.902
Combined network | | 0.979 | 0.971 | 0.976 | 0.975 | 0.999 | 0.934 | 0.969 | 0.967 | 0.988 | 0.957 | 0.971 | 0.972
Combined network with image vector (Proposed) | | 0.999 | 0.997 | 0.998 | 0.998 | 0.996 | 0.998 | 0.999 | 0.997 | 0.999 | 0.998 | 0.997 | 0.999
Design | Modalities | Accuracy C1 | Accuracy C2 | Accuracy C3 | Accuracy Overall | Sensitivity C1 | Sensitivity C2 | Sensitivity C3 | Sensitivity Overall | Specificity C1 | Specificity C2 | Specificity C3 | Specificity Overall
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Single path | FL | 0.860 | 0.880 | 0.855 | 0.875 | 0.832 | 0.885 | 0.842 | 0.853 | 0.843 | 0.890 | 0.825 | 0.885
 | T1-CE | 0.829 | 0.831 | 0.819 | 0.826 | 0.829 | 0.831 | 0.830 | 0.830 | 0.826 | 0.821 | 0.822 | 0.823
 | T2 | 0.800 | 0.781 | 0.789 | 0.790 | 0.811 | 0.810 | 0.802 | 0.808 | 0.816 | 0.801 | 0.791 | 0.803
 | T1 | 0.720 | 0.700 | 0.715 | 0.712 | 0.722 | 0.721 | 0.700 | 0.714 | 0.711 | 0.709 | 0.729 | 0.716
Dual path | FL + T1-CE | 0.885 | 0.869 | 0.873 | 0.876 | 0.865 | 0.899 | 0.899 | 0.888 | 0.909 | 0.871 | 0.873 | 0.884
 | FL + T1 | 0.891 | 0.870 | 0.880 | 0.880 | 0.886 | 0.869 | 0.880 | 0.878 | 0.889 | 0.880 | 0.878 | 0.882
 | T1 + T2 | 0.870 | 0.848 | 0.867 | 0.862 | 0.866 | 0.871 | 0.853 | 0.863 | 0.859 | 0.849 | 0.861 | 0.856
 | FL + T2 | 0.869 | 0.850 | 0.848 | 0.856 | 0.863 | 0.859 | 0.851 | 0.858 | 0.866 | 0.862 | 0.870 | 0.866
 | T1-CE + T2 | 0.851 | 0.842 | 0.846 | 0.846 | 0.859 | 0.855 | 0.841 | 0.852 | 0.840 | 0.844 | 0.847 | 0.844
 | T1 + T1-CE | 0.834 | 0.818 | 0.827 | 0.826 | 0.826 | 0.821 | 0.825 | 0.824 | 0.823 | 0.811 | 0.809 | 0.814
Three path | FL + T1-CE + T2 | 0.949 | 0.941 | 0.933 | 0.941 | 0.933 | 0.937 | 0.933 | 0.943 | 0.930 | 0.913 | 0.939 | 0.927
 | T1 + T1-CE + T2 | 0.922 | 0.933 | 0.927 | 0.927 | 0.911 | 0.916 | 0.929 | 0.919 | 0.920 | 0.919 | 0.921 | 0.920
 | FL + T1 + T2 | 0.935 | 0.911 | 0.919 | 0.922 | 0.921 | 0.924 | 0.921 | 0.922 | 0.901 | 0.914 | 0.920 | 0.912
 | FL + T1-CE + T1 | 0.900 | 0.920 | 0.909 | 0.910 | 0.911 | 0.921 | 0.902 | 0.911 | 0.900 | 0.890 | 0.900 | 0.897
Four path | T1 + T2 + T1-CE + FL | 0.927 | 0.920 | 0.915 | 0.921 | 0.925 | 0.900 | 0.911 | 0.912 | 0.919 | 0.905 | 0.915 | 0.913
Modality concatenated | FL + T2 + T1-CE | 0.905 | 0.891 | 0.891 | 0.895 | 0.923 | 0.889 | 0.831 | 0.881 | 0.899 | 0.887 | 0.900 | 0.895
 | FL + T2 + T1-CE + T1 | 0.909 | 0.900 | 0.860 | 0.906 | 0.900 | 0.877 | 0.889 | 0.899 | 0.879 | 0.892 | 0.851 | 0.887
Combined network | | 0.964 | 0.968 | 0.955 | 0.945 | 0.991 | 0.914 | 0.951 | 0.952 | 0.953 | 0.939 | 0.962 | 0.959
Combined network with image vector (Proposed) | | 0.987 | 0.982 | 0.998 | 0.989 | 0.997 | 0.988 | 0.988 | 0.997 | 0.999 | 0.997 | 0.987 | 0.999
Design | Modalities | Accuracy C1 | Accuracy C2 | Accuracy C3 | Accuracy Overall | Sensitivity C1 | Sensitivity C2 | Sensitivity C3 | Sensitivity Overall | Specificity C1 | Specificity C2 | Specificity C3 | Specificity Overall
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Single path | FL | 0.879 | 0.911 | 0.881 | 0.890 | 0.860 | 0.909 | 0.842 | 0.870 | 0.881 | 0.941 | 0.861 | 0.894
 | T1-CE | 0.836 | 0.830 | 0.829 | 0.832 | 0.841 | 0.835 | 0.832 | 0.836 | 0.846 | 0.831 | 0.833 | 0.837
 | T2 | 0.819 | 0.822 | 0.801 | 0.814 | 0.827 | 0.819 | 0.811 | 0.819 | 0.824 | 0.810 | 0.833 | 0.822
 | T1 | 0.738 | 0.700 | 0.714 | 0.717 | 0.740 | 0.729 | 0.731 | 0.733 | 0.729 | 0.716 | 0.721 | 0.722
Dual path | FL + T1-CE | 0.931 | 0.897 | 0.911 | 0.913 | 0.891 | 0.909 | 0.899 | 0.900 | 0.912 | 0.911 | 0.902 | 0.908
 | FL + T1 | 0.922 | 0.889 | 0.911 | 0.907 | 0.899 | 0.888 | 0.893 | 0.893 | 0.911 | 0.913 | 0.881 | 0.902
 | T1 + T2 | 0.871 | 0.870 | 0.880 | 0.874 | 0.899 | 0.871 | 0.869 | 0.880 | 0.867 | 0.889 | 0.871 | 0.876
 | FL + T2 | 0.871 | 0.861 | 0.878 | 0.870 | 0.878 | 0.868 | 0.862 | 0.869 | 0.876 | 0.869 | 0.880 | 0.875
 | T1-CE + T2 | 0.865 | 0.839 | 0.844 | 0.849 | 0.853 | 0.865 | 0.832 | 0.850 | 0.830 | 0.860 | 0.857 | 0.849
 | T1 + T1-CE | 0.844 | 0.821 | 0.822 | 0.829 | 0.839 | 0.829 | 0.838 | 0.835 | 0.843 | 0.829 | 0.810 | 0.827
Three path | FL + T1-CE + T2 | 0.966 | 0.941 | 0.953 | 0.953 | 0.955 | 0.942 | 0.959 | 0.952 | 0.923 | 0.901 | 0.921 | 0.915
 | T1 + T1-CE + T2 | 0.952 | 0.932 | 0.949 | 0.944 | 0.936 | 0.921 | 0.922 | 0.926 | 0.941 | 0.923 | 0.923 | 0.929
 | FL + T1 + T2 | 0.939 | 0.934 | 0.920 | 0.931 | 0.938 | 0.944 | 0.939 | 0.940 | 0.926 | 0.918 | 0.911 | 0.918
 | FL + T1-CE + T1 | 0.913 | 0.916 | 0.900 | 0.910 | 0.923 | 0.929 | 0.922 | 0.925 | 0.929 | 0.901 | 0.903 | 0.911
Four path | T1 + T2 + T1-CE + FL | 0.955 | 0.927 | 0.951 | 0.944 | 0.942 | 0.919 | 0.903 | 0.921 | 0.930 | 0.937 | 0.937 | 0.935
Modality concatenated | FL + T2 + T1-CE | 0.933 | 0.911 | 0.912 | 0.918 | 0.962 | 0.909 | 0.861 | 0.910 | 0.908 | 0.899 | 0.918 | 0.908
 | FL + T2 + T1-CE + T1 | 0.946 | 0.923 | 0.930 | 0.933 | 0.929 | 0.912 | 0.903 | 0.914 | 0.909 | 0.911 | 0.887 | 0.932
Combined network | | 0.988 | 0.979 | 0.988 | 0.978 | 0.999 | 0.953 | 0.956 | 0.969 | 0.988 | 0.955 | 0.988 | 0.977
Combined network with image vector (Proposed) | | 1.000 | 1.000 | 1.000 | 1.000 | 0.999 | 0.998 | 0.999 | 0.999 | 0.999 | 0.998 | 0.997 | 0.999
Design | Modalities | Accuracy C1 | Accuracy C2 | Accuracy C3 | Accuracy Overall | Sensitivity C1 | Sensitivity C2 | Sensitivity C3 | Sensitivity Overall | Specificity C1 | Specificity C2 | Specificity C3 | Specificity Overall
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Single path | FL | 0.861 | 0.889 | 0.871 | 0.866 | 0.850 | 0.892 | 0.855 | 0.877 | 0.871 | 0.927 | 0.852 | 0.883
 | T1-CE | 0.830 | 0.821 | 0.828 | 0.833 | 0.844 | 0.825 | 0.831 | 0.826 | 0.822 | 0.833 | 0.831 | 0.829
 | T2 | 0.799 | 0.811 | 0.791 | 0.820 | 0.821 | 0.820 | 0.819 | 0.800 | 0.813 | 0.800 | 0.829 | 0.814
 | T1 | 0.721 | 0.705 | 0.701 | 0.726 | 0.727 | 0.719 | 0.732 | 0.709 | 0.719 | 0.722 | 0.721 | 0.721
Dual path | FL + T1-CE | 0.926 | 0.877 | 0.884 | 0.895 | 0.861 | 0.889 | 0.893 | 0.886 | 0.902 | 0.910 | 0.890 | 0.901
 | FL + T1 | 0.891 | 0.876 | 0.901 | 0.889 | 0.896 | 0.871 | 0.883 | 0.897 | 0.901 | 0.904 | 0.891 | 0.899
 | T1 + T2 | 0.866 | 0.871 | 0.870 | 0.867 | 0.881 | 0.872 | 0.849 | 0.869 | 0.861 | 0.872 | 0.866 | 0.866
 | FL + T2 | 0.855 | 0.860 | 0.871 | 0.870 | 0.870 | 0.862 | 0.877 | 0.862 | 0.871 | 0.861 | 0.866 | 0.866
 | T1-CE + T2 | 0.862 | 0.831 | 0.832 | 0.842 | 0.844 | 0.861 | 0.822 | 0.842 | 0.830 | 0.855 | 0.851 | 0.845
 | T1 + T1-CE | 0.831 | 0.811 | 0.821 | 0.827 | 0.819 | 0.830 | 0.831 | 0.821 | 0.833 | 0.811 | 0.816 | 0.820
Three path | FL + T1-CE + T2 | 0.955 | 0.939 | 0.956 | 0.945 | 0.931 | 0.932 | 0.951 | 0.944 | 0.911 | 0.899 | 0.915 | 0.908
 | T1 + T1-CE + T2 | 0.951 | 0.911 | 0.929 | 0.921 | 0.922 | 0.920 | 0.922 | 0.930 | 0.928 | 0.922 | 0.910 | 0.920
 | FL + T1 + T2 | 0.929 | 0.931 | 0.920 | 0.930 | 0.928 | 0.941 | 0.921 | 0.927 | 0.911 | 0.911 | 0.900 | 0.907
 | FL + T1-CE + T1 | 0.895 | 0.906 | 0.901 | 0.909 | 0.905 | 0.921 | 0.900 | 0.901 | 0.921 | 0.888 | 0.889 | 0.899
Four path | T1 + T2 + T1-CE + FL | 0.944 | 0.913 | 0.930 | 0.929 | 0.941 | 0.912 | 0.899 | 0.917 | 0.930 | 0.931 | 0.933 | 0.931
Modality concatenated | FL + T2 + T1-CE | 0.927 | 0.910 | 0.908 | 0.915 | 0.958 | 0.899 | 0.849 | 0.902 | 0.901 | 0.899 | 0.911 | 0.903
 | FL + T2 + T1-CE + T1 | 0.939 | 0.921 | 0.926 | 0.928 | 0.924 | 0.900 | 0.901 | 0.908 | 0.900 | 0.899 | 0.921 | 0.916
Combined network | | 0.951 | 0.969 | 0.989 | 0.970 | 0.981 | 0.977 | 0.926 | 0.967 | 0.951 | 0.944 | 0.987 | 0.964
Combined network with image vector (Proposed) | | 0.998 | 0.997 | 0.997 | 0.997 | 0.999 | 0.981 | 0.999 | 0.992 | 0.999 | 0.998 | 0.997 | 0.999


4.3.2 Attention of classification model over convolutional layers


4.3.3 Attention of classification model over non-predicted labels

4.3.4 Quantitative comparison with state-of-the-art methods
4.3.4.1 Comparison to the State-of-the-Art (BraTS 2018)
Modalities | Methods | Approach | Type | Accuracy |
---|---|---|---|---|
T2 + FL + CE | Puybareau et al. [13] * | Feature-based | 2D | 0.61 |
T1 + T2 + FL + CE | Sun et al. [32] * | Feature-based | 2D | 0.61 |
T1 + FL + CE | Cabezas et al. [33] * | Pretrained VGG + clinical + volume features | 3D | 0.37 |
T1 + T2 + FL + CE | Feng et al. [35] * | Feature-based | 3D | 0.61 |
T1 + T2 + FL + CE | Zhou et al. [39] * | Multi-path CNN | 2D | 0.67 |
T1 + T2 + FL + CE | Huang et al. [14] # | CNN + Feature-based | 3D | 0.69 |
T2 + CE | Guo et al. [36] # | CNN | 3D | 0.59 |
T1 + FL + CE | Pei et al. [19] # | CNN | 3D | 0.59 |
T1 + T2 + FL + CE | Amian et al. [37] # | Feature-based | 3D | 0.52 |
T1 + T2 + FL + CE | Yogananda et al. [38] # | Feature-based | 3D | 0.44 |
T1 + T2 + FL + CE | Proposed* | Integrated CNN | 2D | 0.99 |
T1 + T2 + FL + CE | Proposed# | Integrated CNN | 2D | 1.00 |
4.3.4.2 Comparison to the State-of-the-Art (BraTS 2019)
5. Conclusions
Declaration of Competing Interest
References
- Glioblastoma multiforme: A look inside its heterogeneous nature.Cancers. 2014; 6: 226-239https://doi.org/10.3390/cancers6010226
- Novel Radiomic Features Based on Joint Intensity Matrices for Predicting Glioblastoma Patient Survival Time.IEEE J Biomed Health Inform. 2019; 23: 795-804https://doi.org/10.1109/JBHI.2018.2825027
- Brain tumor segmentation and grading of lower-grade glioma using deep learning in MRI images.Comput Biol Med. 2020; 121103758https://doi.org/10.1016/j.compbiomed.2020.103758
- A deep learning approach for magnetic resonance fingerprinting: Scaling capabilities and good training practices investigated by simulations.Phys Medica. 2021; 89: 80-92
- Current applications of deep-learning in neuro-oncological MRI.Phys Medica. 2021; 83: 161-173
- A classification of MRI brain tumor based on two stage feature level ensemble of deep CNN models.Comput Biol Med. 2022; 146105539https://doi.org/10.1016/j.compbiomed.2022.105539
- Basic of machine learning and deep learning in imaging for medical physicists.Phys Medica. 2021; 83: 194-205https://doi.org/10.1016/j.ejmp.2021.03.026
- AI applications to medical images: From machine learning to deep learning.Phys Medica. 2021; 83: 9-24
- Artificial intelligence: Deep learning in oncological radiomics and challenges of interpretability and data harmonization.Phys Medica. 2021; 83: 108-121
- Explainability of deep neural networks for MRI analysis of brain tumors.Int J Comput Assist Radiol Surg. 2022: 1-11https://doi.org/10.1007/s11548-022-02619-x
- Explainable deep learning models in medical image analysis.Journal of Imaging. 2020; 6: 1-19https://doi.org/10.3390/JIMAGING6060052
- Automated 3D segmentation of brain tumor using visual saliency.Inf Sci. 2018; 424: 337-353https://doi.org/10.1016/j.ins.2017.10.011
- Segmentation of gliomas and prediction of patient overall survival: a simple and fast procedure.In International MICCAI Brainlesion Workshop. 2018; : 199-209
- Overall Survival Prediction for Gliomas Using a Novel Compound Approach.Front Oncol. 2021; 11: 1-20https://doi.org/10.3389/fonc.2021.724191
- On the interpretability of artificial intelligence in radiology: Challenges and opportunities.Radiol Artif Intell. 2020; 2: e190043
- Visual interpretability in 3D brain tumor segmentation network.Comput Biol Med. 2021; 133: 1-11https://doi.org/10.1016/j.compbiomed.2021.104410
- Fuzzy volumetric delineation of brain tumor and survival prediction.Soft Comput. 2020; 24: 13115-13134https://doi.org/10.1007/s00500-020-04728-8
- A Deep Learning-Based Radiomics Model for Prediction of Survival in Glioblastoma Multiforme.Sci Rep. 2017; 7: 1-8https://doi.org/10.1038/s41598-017-10649-8
- Context aware deep learning for brain tumor segmentation, subtype classification, and survival prediction using radiology images.Sci Rep. 2020; 10: 1-11https://doi.org/10.1038/s41598-020-74419-9
- Multi-planar spatial-ConvNet for segmentation and survival prediction in brain cancer.In International MICCAI Brainlesion Workshop. 2018; : 94-104
- Multi-Channel 3D Deep Feature Learning for Survival Time Prediction of Brain Tumor Patients Using Multi-Modal Neuroimages.Sci Rep. 2019; 9https://doi.org/10.1038/s41598-018-37387-9
- Survival prediction of patients suffering from glioblastoma based on two-branch DenseNet using multi-channel features.Int J Comput Assist Radiol Surg. 2021; 16: 207-217https://doi.org/10.1007/s11548-021-02313-4
- Brain tumor segmentation and tractographic feature extraction from structural MR images for overall survival prediction.In International MICCAI Brainlesion Workshop. 2018; : 128-141
- Ensemble learning of multiview CNN models for survival time prediction of brain tumor patients using multimodal MRI scans.Turkish J Electr Eng Comput Sci. 2021; 29: 616-631https://doi.org/10.3906/ELK-2002-175
- BraTS 2018 Proceedings.https://www.cbica.upenn.edu/sbia/Spyridon.Bakas/MICCAI_BraTS/MICCAI_BraTS_2018_proceedings_shortPapers.pdf
- Evaluation of tumor-derived MRI-texture features for discrimination of molecular subtypes and prediction of 12-month survival status in glioblastoma.Med Phys. 2015; 42: 6725-6735https://doi.org/10.1118/1.4934373
- MMMNA-Net for Overall Survival Time Prediction of Brain Tumor Patients.arXiv preprint arXiv:2206.06267. 2022
- A novel compound-based loss function for glioma segmentation with deep learning.Optik. 2022; 265169443https://doi.org/10.1016/j.ijleo.2022.169443
- Demystifying Brain Tumor Segmentation Networks: Interpretability and Uncertainty Analysis.Front Comput Neurosci. 2020; 14: 1-12https://doi.org/10.3389/fncom.2020.00006
- Brain tumor segmentation and survival prediction using multimodal MRI scans with deep learning.Front Neurosci. 2019; 13: 1-9
- Survival prediction using ensemble tumor segmentation and transfer learning.arXiv preprint arXiv:1810.04274. 2018
- Overall Survival Prediction in Glioblastoma With Radiomic Features Using Machine Learning.Front Comput Neurosci. 2020; 14https://doi.org/10.3389/fncom.2020.00061
- Brain Tumor Segmentation Using an Ensemble of 3D U-Nets and Overall Survival Prediction Using Radiomic Features.Front Comput Neurosci. 2020; 14: 1-12https://doi.org/10.3389/fncom.2020.00025
- Domain knowledge based brain tumor segmentation and overall survival prediction.in: International MICCAI Brainlesion Workshop. 2019: 285-295
- Multi-resolution 3D CNN for MRI brain tumor segmentation and survival prediction.in: International MICCAI Brainlesion Workshop. 2019: 221-230
- Fully automated brain tumor segmentation and survival prediction of gliomas using deep learning and MRI.in: International MICCAI Brainlesion Workshop. 2019: 99-112
- Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients.in: International Conference on Medical Image Computing and Computer-Assisted Intervention. 2020: 221-231