PMC Full-Text Search Results

Items: 1 to 20 of 115,487

1.

Figure 6. From: Deep Learning Enables Optofluidic Zoom System with Large Zoom Ratio and High Imaging Resolution.

Comparison of image quality of resolution targets. (a) Image taken at f = 31.3 mm without deep learning. (b) Image processed at f = 31.3 mm by deep learning. (c) Image taken at f = 21.8 mm without deep learning. (d) Image processed at f = 21.8 mm by deep learning. (e) Image taken at f = 8.1 mm without deep learning. (f) Image processed at f = 8.1 mm by deep learning. (g) Image taken at f = 4.0 mm without deep learning. (h) Image processed at f = 4.0 mm by deep learning.

Jiancheng Xu, et al. Sensors (Basel). 2023 Mar;23(6):3204.
2.

Fig. 4. From: A Hybrid Deep Learning CNN model for COVID-19 detection from chest X-rays.

The confusion matrix of each transfer model: (a) VGG16 model confusion matrix. (b) VGG19 model confusion matrix. (c) EfficientNet B0 model confusion matrix. (d) ResNet50 model confusion matrix. (e) Hybrid deep learning model (max pooling) confusion matrix. (f) Hybrid deep learning model (average pooling)-Naive Bayes confusion matrix. (g) Hybrid deep learning model (average pooling)-Random Forest confusion matrix. (h) Hybrid deep learning model (average pooling)-KNN confusion matrix. (i) Hybrid deep learning model (average pooling)-SVM (rbf) confusion matrix. (j) Hybrid deep learning model (average pooling)-SVM (sigmoid) confusion matrix. (k) Hybrid deep learning model (average pooling)-SVM (linear) confusion matrix. (l) Hybrid deep learning model (average pooling)-NN confusion matrix.

Mohan Abdullah, et al. Heliyon. 2024 Mar 15;10(5):e26938.
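Confusion matrices like those in this caption are a standard scikit-learn computation. Below is a minimal, hypothetical sketch: the labels and predictions are random stand-ins for one transfer model's outputs, and the three class names are assumptions, not taken from the paper.

import numpy as np
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=300)                    # assumed classes: 0=normal, 1=COVID-19, 2=pneumonia
noise = rng.integers(0, 3, size=300)
y_pred = np.where(rng.random(300) < 0.8, y_true, noise)  # ~80% agreement with the true labels
print(confusion_matrix(y_true, y_pred))                  # rows: true class, columns: predicted class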
3.

Fig. 4. From: Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis.

Fitted normal curves on accuracy distributions of the four networks. Normal curves are fitted on the accuracy distributions of the clinical deep learning network (blue), the structural connectivity deep learning network (orange), morphology deep learning network (purple) and the clinical-MRI combined deep learning network (gray), based on 16 randomly selected subjects (repeated 10,000 times) from the first evaluation set (n = 32). The mean accuracies (dashed lines) of these distributions were 68.7%, 62.5%, 62.4% and 84.4% for the clinical deep learning network, structural connectivity deep learning network, morphology deep learning network and clinical-MRI combined deep learning network, respectively. A paired t-test showed significant differences between accuracies of each pair of networks (all p < 0.001).

Hannelore K. van der Burgh, et al. Neuroimage Clin. 2017;13:361-369.
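The evaluation scheme this caption describes (accuracy over 16 subjects subsampled from a 32-subject set, repeated 10,000 times, normal curves fitted to the distributions, paired t-test between networks) is easy to reproduce. A minimal NumPy/SciPy sketch follows; the per-subject correctness vectors are simulated stand-ins, not the paper's network outputs.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_draws, subsample = 32, 10_000, 16

# Hypothetical per-subject correctness (True = correct) for two networks.
correct_a = rng.random(n_subjects) < 0.69   # stand-in for the clinical network
correct_b = rng.random(n_subjects) < 0.84   # stand-in for the combined network

acc_a = np.empty(n_draws)
acc_b = np.empty(n_draws)
for i in range(n_draws):
    idx = rng.choice(n_subjects, size=subsample, replace=False)
    acc_a[i] = correct_a[idx].mean()
    acc_b[i] = correct_b[idx].mean()

# Normal curves fitted to each accuracy distribution.
mu_a, sd_a = stats.norm.fit(acc_a)
mu_b, sd_b = stats.norm.fit(acc_b)
print(f"network A: mean={mu_a:.3f} sd={sd_a:.3f}")
print(f"network B: mean={mu_b:.3f} sd={sd_b:.3f}")

# Paired t-test across the paired subsamples.
t, p = stats.ttest_rel(acc_a, acc_b)
print(f"paired t-test: t={t:.2f}, p={p:.2e}")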
4.

Fig. 3. Data conversion and deep learning for the estimation of VT from single-lead electrocardiography data. From: Feasibility of the deep learning method for estimating the ventilatory threshold with electrocardiography data.

Schematic illustration of the pre-processing of electrocardiography and application of deep learning. ECG electrocardiography, VT ventilatory threshold, DLT deep learning threshold, DL deep learning, VO2 oxygen uptake, CPX cardiopulmonary exercise testing.

Kotaro Miura, et al. NPJ Digit Med. 2020;3:141.
5.

Fig. 2. From: Deep learning classification of uveal melanoma based on histopathological images and identification of a novel indicator for prognosis of patients.

The deep learning (GoogLeNet) model for patch prediction. A Probability heatmaps of alive and dead status at the stage of patch prediction. The color bars represent the vital-status probability of each patch. B The accuracy curve of deep learning model training. C The confusion matrix of the deep learning model. D The ROC curve and AUC value (RAUC) of the deep learning model in the TCGA-UVM cohort. E The Precision-Recall curve and AUC value (PAUC) of the deep learning model in the TCGA-UVM cohort. F The RAUC of the deep learning model in the HX cohort. G The PAUC of the deep learning model in the HX cohort.

Qi Wan, et al. Biol Proced Online. 2023;25:15.
6.

Fig 7. Machine and deep learning results. From: A hybrid human recognition framework using machine learning and deep neural networks.

(a) Machine learning results. (b) Deep learning results.

Abdullah M. Sheneamer, et al. PLoS One. 2024;19(6):e0300614.
7.

Fig 7. Confusion matrices. From: A deep hybrid learning pipeline for accurate diagnosis of ovarian cancer based on nuclear morphology.

In this research work we have compared Deep Hybrid Learning, in both its XGBoost and Random Forest variants, with a conventional Deep Neural Network (without transfer learning and having the same 21-layered CNN as DHL), DenseNet201 with transfer learning, InceptionNet v3 with transfer learning, ResNet50 with transfer learning and VGG16 with transfer learning. ‘Normal’ and ‘Cancer’ have been denoted as 0 and 1, respectively, in the confusion matrices. 1. Deep Hybrid Learning with Random Forest; 2. Deep Hybrid Learning with XGBoost; 3. Conventional Deep Neural Network Model; 4. DenseNet201 with transfer learning; 5. ResNet50 with transfer learning; 6. InceptionNetv3 with transfer learning; 7. VGG16 with transfer learning.

Duhita Sengupta, et al. PLoS One. 2022;17(1):e0261181.
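The "deep hybrid learning" pattern in this caption (a CNN used as a feature extractor feeding a classical classifier) can be sketched in a few lines. The sketch below uses PyTorch and scikit-learn on random data; the tiny CNN is a hypothetical stand-in for the paper's 21-layer network, and a random forest plays the role of the DHL-RF variant.

import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import RandomForestClassifier

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),      # global average pooling
            nn.Flatten(),                 # -> 32-dim feature vector
        )

    def forward(self, x):
        return self.features(x)

# Hypothetical data: 200 single-channel 64x64 images, binary labels.
X = torch.randn(200, 1, 64, 64)
y = np.random.default_rng(0).integers(0, 2, size=200)

cnn = TinyCNN().eval()
with torch.no_grad():
    feats = cnn(X).numpy()                # deep features

clf = RandomForestClassifier(n_estimators=100).fit(feats[:150], y[:150])
print("held-out accuracy:", clf.score(feats[150:], y[150:]))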
8.

Fig. S4. From: Deep learning predictions of survival based on MRI in amyotrophic lateral sclerosis.

Fitted normal curves on accuracy distributions of seven networks. Normal curves are fitted on the accuracy distributions of the clinical deep learning network (orange), the structural connectivity deep learning network (blue), morphology deep learning network (purple), clinical-structural connectivity deep learning network (pink), clinical-morphology deep learning network (green), structural connectivity-morphology deep learning network (yellow) and the clinical-MRI combined deep learning network (gray), based on 16 randomly selected subjects (repeated 10,000 times) from the first evaluation set (n = 32). The mean accuracies (dashed lines) of these distributions were 68.7%, 62.5%, 62.4%, 78.0%, 81.2%, 78.1%, and 84.4% for the deep learning networks mentioned, respectively. A paired t-test showed significant differences between accuracies of each pair of networks (all p < 0.001).

Hannelore K. van der Burgh, et al. Neuroimage Clin. 2017;13:361-369.
9.

Fig. 11. Deep learning in optical metrology. From: Deep learning in optical metrology: a review.

Because of the significant changes that deep learning brings to the concept of optical metrology technology, almost all elementary tasks of digital image processing in optical metrology have been reformed by deep learning.

Chao Zuo, et al. Light Sci Appl. 2022;11:39.
10.

Fig. 2. All lesions with high disagreement between expert observers and between expert observers and the deep learning system. From: Deep learning-based grading of ductal carcinoma in situ in breast histopathology images.

Lesions with the same letter come from the same patient. All lesions had an image size of 512 × 512 pixels except for (B2) where we show the middle 512 × 512 pixel patch. For lesion (A) the observers graded 2–3–2 and the deep learning system predicted grade 1. On final review in a consensus meeting, grades 1 and 3 did not seem justified, therefore the expert observers assigned this lesion as grade 2. For lesions (B1) and (B2) the observers graded 1–3–1 and the deep learning system predicted grade 1. Grade 3 did not seem justified during the consensus meeting and was an error by an expert observer. For lesion (C1) the observers graded 3–2–2 and the deep learning system predicted grade 1. For lesion (C2) the observers graded 3–2–3 and the deep learning system predicted grade 1. Both these lesions concern floaters and should not have been in the dataset. For lesion (D1) the observers graded 3–1–2 and the deep learning system predicted grade 2. For lesion (D2) the observers graded 3–1–3 and the deep learning system predicted grade 2. On review, both lesions are not obviously DCIS. For lesion (E1) the observers graded 2–1–1 and the deep learning system predicted grade 3. For lesions (E2) and (E3) the observers graded 2–2–1 and the deep learning system predicted grade 3. On review, these three lesions are not obviously DCIS.

Suzanne C. Wetstein, et al. Lab Invest. 2021;101(4):525-533.
11.

Figure 5. From: Multimodal deep learning models for early detection of Alzheimer’s disease stage.

Internal cross-validation results for integration of data modalities to predict Alzheimer's stage. (a) Imaging + EHR + SNP: deep learning prediction performs better than shallow learning predictions. (b) EHR + SNP: deep learning prediction performs better than shallow learning predictions. (c) Imaging + EHR: deep learning prediction performs better than shallow learning predictions. (d) Imaging + SNP: shallow learning gave a better prediction than deep learning due to small sample sizes. (kNN k-nearest neighbors, SVM support vector machines, RF random forests, SM shallow models, and DL deep learning).

Janani Venugopalan, et al. Sci Rep. 2021;11:3254.
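The modality integration this caption reports can be summarized as feature-level fusion: per-modality feature vectors are concatenated and passed to a classifier head. A minimal PyTorch sketch follows; all dimensions, the three-stage output, and the head architecture are assumptions, not the paper's design.

import torch
import torch.nn as nn

imaging = torch.randn(8, 128)   # hypothetical imaging features
ehr     = torch.randn(8, 32)    # hypothetical EHR features
snp     = torch.randn(8, 64)    # hypothetical SNP features

fused = torch.cat([imaging, ehr, snp], dim=1)      # (8, 224) fused vector
classifier = nn.Sequential(nn.Linear(224, 64), nn.ReLU(), nn.Linear(64, 3))
logits = classifier(fused)                          # assumed 3 disease stages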
12.

Fig. 3. Comparison between shallow learning and deep learning in neural networks. From: Deep Learning in Medical Imaging: General Overview.

A. Typical deep learning neural network with 3 deep layers between input and output layers. B. Typical artificial neural network with 1 layer between input and output layers.

June-Goo Lee, et al. Korean J Radiol. 2017 Jul-Aug;18(4):570-584.
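The contrast drawn in this figure is easy to make concrete. Below is a minimal PyTorch sketch of the two topologies, one hidden layer versus three; the layer widths and the 64-dimensional input are arbitrary assumptions.

import torch.nn as nn

shallow = nn.Sequential(          # input -> 1 hidden layer -> output
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

deep = nn.Sequential(             # input -> 3 hidden layers -> output
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 2),
)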
13.

Figure 6. From: Diagnosis of architectural distortion on digital breast tomosynthesis using radiomics and deep learning.

Examples of Grad-CAM maps of architectural distortion on DBT images, predicted by ResNet50 deep learning. (A) The RMLO view of a 61-year-old patient diagnosed with invasive ductal cancer. The BI-RADS score is 5. The radiomics score of the combined model is 0.72, and the probability predicted by deep learning is 0.54, both correctly diagnosing this case as malignant. (B) The RMLO view of a 42-year-old patient diagnosed with adenosis. The BI-RADS score is 4C. The radiomics score of the combined model is 0.48, and the probability predicted by deep learning is 0.52. The radiomics model makes a correct benign diagnosis, but deep learning gives a false-positive diagnosis. (C) The RMLO view of a 46-year-old patient diagnosed with fibroadenoma. The BI-RADS score is 4B. The radiomics score of the combined model is 0.41, and the probability predicted by deep learning is 0.51. The radiomics model makes a correct benign diagnosis, whereas deep learning does not. However, although deep learning does not give a correct diagnosis, it can localize the suspicious area.

Xiao Chen, et al. Front Oncol. 2022;12:991892.
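Grad-CAM maps like those in this figure come from weighting the last convolutional feature maps by their spatially pooled gradients. A minimal PyTorch/torchvision sketch for a ResNet50 follows; the random input tensor stands in for a preprocessed DBT image, the untrained weights are only for brevity, and the top logit is used as the target score.

import torch
import torch.nn.functional as F
from torchvision.models import resnet50

model = resnet50(weights=None).eval()    # untrained weights, for brevity
feats, grads = {}, {}

def fwd_hook(_, __, output):
    feats["a"] = output                  # activations of the hooked layer

def bwd_hook(_, __, grad_output):
    grads["a"] = grad_output[0]          # gradients w.r.t. those activations

layer = model.layer4[-1]                 # last conv block of ResNet50
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

x = torch.randn(1, 3, 224, 224)          # stand-in for a preprocessed image
score = model(x)[0].max()                # top logit as the target class score
score.backward()

w = grads["a"].mean(dim=(2, 3), keepdim=True)     # per-channel weights
cam = F.relu((w * feats["a"]).sum(dim=1))         # weighted sum over channels
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224),
                    mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # normalize to [0, 1]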
14.

Figure 2. From: Deep learning model for diagnosing early gastric cancer using preoperative computed tomography images.

The workflow of deep learning and deep transfer learning (Pre-trained CNNs). DL, deep learning; DTL, deep transfer learning.

Qingwen Zeng, et al. Front Oncol. 2022;12:1065934.
15.

Figure 1. Conventional and incremental deep learning workflows for cell tracking. From: Tracking cell lineages in 3D by incremental deep learning.

(A) Schematic illustration of a typical deep learning workflow, starting with the annotation of imaging data to generate training datasets, training of deep learning models, prediction by deep learning and proofreading. (B) Schematic illustration of incremental learning with ELEPHANT. Imaging data are fed into a cycle of annotation, training, prediction, and proofreading to generate cell lineages. At each iteration, model parameters are updated and saved. This workflow applies to both detection and linking phases.

Ko Sugawara, et al. eLife. 2022;11:e69380.
16.

Figure 4. From: Deep learning to distinguish Best vitelliform macular dystrophy (BVMD) from adult-onset vitelliform macular degeneration (AVMD).

GradCAM output highlighting relevant features for each of the 4 deep learning classifiers. Upper left: deep learning classifier for unprocessed OCT images; Upper right: deep learning classifier for unprocessed BAF images; Lower left: deep learning classifier for OCT processed images; Lower right: deep learning classifier for BAF processed images. BAF blue autofluorescence, OCT optical coherence tomography.

Emanuele Crincoli, et al. Sci Rep. 2022;12:12745.
17.

Fig 1. Transition of the number of publications regarding machine learning or deep learning in PubMed. From: Regulatory-approved deep learning/machine learning-based medical devices in Japan as of 2020: A systematic review.

(A) Transition of the number of machine-learning-related publications. The search query was (Machine learning) AND (("2015/1/1"[Date - Publication] : "2020/12/31"[Date - Publication])) (accessed 2 February 2021). (B) Transition of the number of deep-learning-related publications. The search query was (Deep learning) AND (("2015/1/1"[Date - Publication] : "2020/12/31"[Date - Publication])) (accessed 2 February 2021). Publications regarding machine learning or deep learning are rapidly increasing in PubMed.

Nao Aisu, et al. PLOS Digit Health. 2022 Jan;1(1):e0000001.
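Counts like those plotted in this figure can be pulled programmatically. A minimal sketch with Biopython's Entrez E-utilities wrapper follows; the per-year loop is an assumption about how the transition curves were built, and the contact e-mail is a placeholder.

from Bio import Entrez

Entrez.email = "you@example.com"   # NCBI requires a contact address

def pubmed_count(term, year):
    # Date-restricted PubMed search; retmax=0 returns only the hit count.
    handle = Entrez.esearch(
        db="pubmed", term=term, datetype="pdat",
        mindate=f"{year}/01/01", maxdate=f"{year}/12/31", retmax=0,
    )
    return int(Entrez.read(handle)["Count"])

for year in range(2015, 2021):
    ml = pubmed_count("machine learning", year)
    dl = pubmed_count("deep learning", year)
    print(year, ml, dl)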
18.

Figure 2. From: Improving performance of deep learning models using 3.5D U-Net via majority voting for tooth segmentation on cone beam computed tomography.

Schematics of the 2.5D U-Net and our proposed 3.5D U-Nets using majority voting. The 2.5D U-Net combines the predictions of deep learning models trained by 2Da U-Net, 2Dc U-Net and 2Ds U-Net. The 3.5Dv3 U-Net combines the predictions of deep learning models trained by 2.5Dv U-Net, 2.5D U-Net and 3D U-Net. The 3.5Dv4 U-Net combines the predictions of deep learning models trained by 2Da U-Net, 2Dc U-Net, 2Ds U-Net and 3D U-Net. The 3.5Dv5 U-Net combines the predictions of deep learning models trained by 2Da U-Net, 2Dc U-Net, 2Ds U-Net, 2.5D U-Net and 3D U-Net.

Kang Hsu, et al. Sci Rep. 2022;12:19809.
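The majority-voting step that combines several U-Net predictions into a 3.5D result reduces to a vote over stacked masks. A minimal NumPy sketch follows, with random binary volumes standing in for the five models' predictions in the 3.5Dv5 configuration.

import numpy as np

rng = np.random.default_rng(0)
# Hypothetical binary predictions from 5 models on a 64^3 volume,
# e.g. the 2Da, 2Dc, 2Ds, 2.5D and 3D U-Nets of the 3.5Dv5 variant.
masks = rng.integers(0, 2, size=(5, 64, 64, 64))

votes = masks.sum(axis=0)                              # per-voxel vote count
fused = (votes > masks.shape[0] / 2).astype(np.uint8)  # keep strict-majority voxels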
19.

Fig. 1. Timeline of publications in deep learning for medical imaging. From: Fusion of medical imaging and electronic health records using deep learning: a systematic review and implementation guidelines.

Timeline showing growth in publications on deep learning for medical imaging, found by using the same search criteria on PubMed and Scopus. The figure shows that fusion has only constituted a small, but growing, subset of medical deep learning literature.

Shih-Cheng Huang, et al. NPJ Digit Med. 2020;3:136.
20.

Figure 1. From: GeneAI 3.0: powerful, novel, generalized hybrid and ensemble deep learning frameworks for miRNA species classification of stationary patterns from nucleotides.

Global architecture of GeneAI 3.0 (AtheroPoint LLC, CA, USA). SML: Solo machine learning; EML: Ensemble machine learning; SDL: Solo deep learning; HDL: Hybrid deep learning; EDL: Ensemble deep learning.

Jaskaran Singh, et al. Sci Rep. 2024;14:7154.
