Deep learning for liver tumor diagnosis part II: convolutional neural network interpretation using radiologic imaging features

Eur Radiol. 2019 Jul;29(7):3348-3357. doi: 10.1007/s00330-019-06214-8. Epub 2019 May 15.

Abstract

Objectives: To develop a proof-of-concept "interpretable" deep learning prototype that justifies aspects of the predictions made by a pre-trained hepatic lesion classifier.

Methods: A convolutional neural network (CNN) was engineered and trained to classify six hepatic tumor entities using 494 lesions on multi-phasic MRI, as described in Part 1. A subset of each lesion class was labeled with up to four key imaging features per lesion. A post hoc algorithm inferred the presence of these features in a test set of 60 lesions by analyzing the activation patterns of the pre-trained CNN. Feature maps were generated to highlight the regions of the original image corresponding to each feature, and each identified feature was assigned a relevance score denoting its relative contribution to the predicted lesion classification.
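The abstract does not disclose the implementation, but the approach it describes (probing inner-layer activations of a pre-trained classifier, deriving per-feature presence estimates, and projecting feature evidence back onto the image) can be sketched as follows. This is a minimal, hypothetical PyTorch sketch, not the authors' code: the stand-in CNN, the choice of inner layer, the linear per-feature probes, and names such as N_FEATURES are all illustrative assumptions.

```python
# Hypothetical sketch (not the authors' code): inferring imaging-feature
# presence from inner-layer activations of a pre-trained CNN in PyTorch.
import torch
import torch.nn as nn

N_FEATURES = 14  # hypothetical number of labeled radiological features

# Stand-in for the pre-trained lesion classifier from Part 1.
cnn = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 6),  # six hepatic tumor classes
)

# Capture activations of an inner layer with a forward hook.
activations = {}
def hook(module, inputs, output):
    activations["inner"] = output.detach()

cnn[2].register_forward_hook(hook)  # second conv layer, chosen arbitrarily

# Per-feature linear probes, assumed trained on the feature-labeled subset
# (training loop omitted); each probe maps pooled activations to a logit.
probes = nn.Linear(32, N_FEATURES)

image = torch.randn(1, 3, 64, 64)  # placeholder multi-phasic MRI patch
class_logits = cnn(image)

acts = activations["inner"]                    # (1, 32, H, W)
pooled = acts.mean(dim=(2, 3))                 # global average pooling
feature_probs = torch.sigmoid(probes(pooled))  # presence of each feature

# Feature map for one feature: weight activation channels by the probe's
# weights and upsample to image resolution (a CAM-style visualization).
k = 0  # index of the feature to visualize
cam = torch.einsum("c,bchw->bhw", probes.weight[k], acts).clamp(min=0)
feature_map = nn.functional.interpolate(
    cam.unsqueeze(1), size=image.shape[-2:], mode="bilinear",
    align_corners=False,
)
print(feature_probs.shape, feature_map.shape)
```

The CAM-style projection is used here purely as a plausible stand-in for the paper's feature maps; the study's actual mapping and relevance-scoring algorithms may differ.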

Results: The interpretable deep learning system achieved 76.5% positive predictive value and 82.9% sensitivity in identifying the correct radiological features present in each test lesion. The model misclassified 12% of lesions. Correct features were identified less often in misclassified lesions than in correctly classified lesions (60.4% vs. 85.6%). Feature maps were consistent with the original image voxels contributing to each imaging feature, and feature relevance scores tended to reflect the most prominent imaging criteria for each class.
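For context, the reported metrics follow the standard confusion-matrix definitions; below is a minimal Python sketch of the arithmetic, using hypothetical counts that are not taken from the study.

```python
# Standard definitions behind the reported metrics. The example counts are
# hypothetical placeholders, not data from the study.
def ppv(tp: int, fp: int) -> float:
    """Positive predictive value: TP / (TP + FP)."""
    return tp / (tp + fp)

def sensitivity(tp: int, fn: int) -> float:
    """Sensitivity (recall): TP / (TP + FN)."""
    return tp / (tp + fn)

# Made-up counts of feature identifications across a test set:
print(f"PPV: {ppv(90, 30):.1%}")                # 75.0%
print(f"Sensitivity: {sensitivity(90, 20):.1%}")  # 81.8%
```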

Conclusions: This interpretable deep learning system demonstrates a proof of principle for illuminating portions of a pre-trained deep neural network's decision-making by analyzing its inner layers and automatically describing the features contributing to its predictions.

Key points:

  • An interpretable deep learning system prototype can explain aspects of its decision-making by identifying relevant imaging features and showing where these features are found on an image, facilitating clinical translation.
  • By providing feedback on the importance of various radiological features in performing differential diagnosis, interpretable deep learning systems have the potential to interface with standardized reporting systems such as LI-RADS, validating ancillary features and improving clinical practicality.
  • An interpretable deep learning system could potentially add quantitative data to radiologic reports and serve radiologists with evidence-based decision support.

Keywords: Artificial intelligence; Deep learning; Liver cancer.

MeSH terms

  • Adult
  • Aged
  • Algorithms
  • Bile Duct Neoplasms / diagnostic imaging
  • Bile Ducts, Intrahepatic
  • Carcinoma, Hepatocellular / diagnostic imaging*
  • Cholangiocarcinoma / diagnostic imaging
  • Deep Learning*
  • Female
  • Humans
  • Image Interpretation, Computer-Assisted / methods
  • Liver Neoplasms / diagnostic imaging*
  • Machine Learning
  • Magnetic Resonance Imaging / methods
  • Male
  • Middle Aged
  • Neural Networks, Computer*
  • Predictive Value of Tests
  • Proof of Concept Study
  • Retrospective Studies